Mobile World Congress Coming To USA In 2017

June 24, 2016 by mphillips  
Filed under Mobile

Mobile World Congress, considered by many experts as the most important tech trade show in the world, is coming to the U.S. Trade groups GSMA and CTIA are joining forces to bring a smaller version of the event to the U.S. in 2017.

GSMA Mobile World Congress Americas will debut Sept. 12 to 14, 2017, in San Francisco and will replace U.S. trade group CTIA’s Super Mobility conference. Super Mobility will continue this year in Las Vegas from Sept. 7 to 9.

The new conference will be the “first truly global wireless event” in the Americas, CTIA President and CEO Meredith Attwell Baker said in a statement.

The new trade show, however, will apparently be more focused, spotlighting the leading innovations from the North American mobile industry, John Hofman, CEO of GSMA, said in a statement.

The trade groups expect about 30,000 attendees and 1,000 exhibitors at the 2017 trade show, similar to the numbers from CTIA’s Super Mobility conference.

GSMA’s Mobile World Congress in Barcelona, Spain, earlier this year drew more than 100,000 attendees and 2,200 exhibitors. The 2017 Barcelona event will take place from Feb. 27 to March 2.

The new Mobile World Congress Americas will feature C-level speakers, exhibits featuring the latest mobile technologies, and a regulatory and public policy program.

 

 

Google Fiber Acquires Webpass, Increases Urban Coverage

June 24, 2016 by mphillips  
Filed under Around The Net

Google Fiber has agreed to purchase Internet service provider Webpass to quickly increase its urban coverage and offer customers a combination of fiber and wireless delivery of high-speed Internet.

For Google Fiber, which has typically worked with cities in planning and building a fiber network from scratch, the acquisition will give the Alphabet business a head start in many markets, particularly in dense urban areas.

Financial terms of the acquisition were not disclosed. Google did not immediately comment on the acquisition.

San Francisco-based Webpass owns and operates its own Ethernet network, which removes its dependence on phone and cable companies. It has operations in San Francisco, Oakland, Emeryville, Berkeley, San Diego, Miami, Miami Beach, Coral Gables, Chicago and Boston. The company offers business connections from 10 Mbps to 1,000 Mbps and residential service from 100 Mbps to 1 Gbps.

Google is already working in San Francisco, where Webpass also operates, and is negotiating with property owners and managers in buildings near existing fiber infrastructure to explore connecting their residents to gigabit Internet.

Webpass will help further expand that coverage, as it will remain focused on rapidly deploying high-speed Internet connections to residential and commercial buildings, mainly using point-to-point wireless, Webpass President Charles Barr said in a blog post Wednesday announcing the proposed acquisition.

“Google Fiber’s resources will enable Webpass to grow faster and reach many more customers than we could as a standalone company,” Barr wrote.

 

 

 

Will Intel Skylake-X 10 Debut In Q2 2017?

June 24, 2016 by Michael  
Filed under Computing

Few details have been released on the new 10-core Skylake-X, which will replace the just-released Intel Core i7-6950X Extreme Edition.

Intel has just released its Broadwell-E generation of ten-, eight- and six-core parts, with the Intel Core i7-6950X Extreme Edition being the fastest and the most expensive.  But we have managed to get a few details about its replacement – the Skylake-X.

You can expect two SKUs: a 140W X version with 10 cores and a version with fewer cores called the K version. The new Extreme Edition CPUs will use the new R4 socket. This new socket is also called LGA 2066, a number some 55 higher than that of the existing LGA 2011 socket.

There will also be a Kaby Lake-X four-core processor with a 95+ W TDP using the same LGA 2066 R4 socket. Both Skylake-X and Kaby Lake-X will support the new Kaby Lake 200-series chipset.

This 200-series chipset will be the successor to the Skylake 100 series. The new chipset will come with up to 24 PCIe 3.0 lanes, which is in fact the only major difference. It also supports Optane storage technology, something the 100-series chipset cannot. The Kaby Lake 200-series chipset supports six SATA 3.0 ports, up to 10 USB 3.0 ports, DMI 3.0, and up to three x4 ports for PCIe 3.0 drives.

If the Zen desktop core gives Intel some serious competition, we bet that Intel won’t charge $1,700 for its highest-end overclockable desktop CPU. Zen is still at least a few months away; we expect it in late 2016 at best.

Courtesy-Fud

 

MIT Develops Swarm Chip For Increased Performance

June 24, 2016 by Michael  
Filed under Uncategorized

Researchers at MIT have figured out how to make chips faster without piling on more code.

They have managed this with a chip design called Swarm that makes running programs across processors with multiple cores more efficient and easier to write. It involved some pretty complex stuff, so grab a tea, pop some Aspirins and drown out your colleagues’ chatter as we try to explain.

MIT said that a program running on 64-core processors should, in theory, be 64 times faster than it would be on a single-core machine.

But most computers run programs in a sequence of commands, for example: wake up, get dressed, make breakfast. Splitting these commands into chunks to run across multiple cores raises complications in ensuring that everything syncs up, meaning that real-world performance doesn’t match theoretical performance.

Creating these chunks of code for parallel running also involves a lot of complex lines of code, which takes time to write.
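To make that concrete, here is a minimal, hedged Python sketch of the conventional approach being contrasted here: the programmer explicitly carves the work into chunks, hands them to separate worker processes, and then has to collect and combine the partial results. The data, chunk count and the partial_sum helper are arbitrary illustrative choices, not anything from MIT’s work.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Each worker handles one explicitly carved-out chunk of the data.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4

    # The programmer must decide how to split the work...
    step = len(data) // n_workers
    chunks = [data[i * step:(i + 1) * step] for i in range(n_workers)]

    # ...farm it out to multiple cores...
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(partial_sum, chunks))

    # ...and explicitly synchronise at the end by combining the results.
    print(sum(partials))  # 499999500000
```

Even for a task this trivial, the splitting and recombining is the programmer’s problem; Swarm’s pitch is to move most of that burden into the hardware.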

The MIT boffins came up with Swarm, a chip design that better synchronises the parallel running of programs across multiple cores to yield three to 18 times faster performance with only a tenth of the code, or in some cases less.

“Multi-core systems are really hard to program,” explained project lead Daniel Sanchez, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science.

“You have to explicitly divide the work that you’re doing into tasks, and then you need to enforce some synchronisation between tasks accessing shared data. What this architecture does, essentially, is remove all sorts of explicit synchronisation to make parallel programming much easier.”

Many complex applications rely on exploring abstract data structures known in computer science as graphs. Think of something that visually resembles a cross between a spider’s web and the scientific diagrams of the structure of molecules.

These graphs are made up of abstract objects known as nodes, generally shown as circles connected by lines commonly called edges. These edges are frequently associated with numbers called weights that describe the relationship between the connected nodes, for example, the distance between two locations.

Using the prior example, a computer program would use an algorithm to explore the different edges and nodes and then serve someone with the fastest route to take from one location to another.

In sequential and parallel algorithm running, this throws up the problem of an algorithm exploring a load of irrelevant data before it gets the answer it’s looking for, which slows real-world performance.

The way around this is to create algorithms that prioritise various bits of graph exploration, for example, one could explore the edges with lower weights or the nodes with the lower number of edges. And here’s where Swarm comes in.
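In software, that sort of prioritised exploration is normally handled with a priority queue: always expand the most promising node next. The following Python sketch is a generic, Dijkstra-style illustration of the idea rather than anything specific to Swarm; the road network, node names and weights are made up for the example.

```python
import heapq

def shortest_path(graph, start, goal):
    """Weight-ordered graph exploration: the cheapest-looking route expands first.

    graph maps a node to a list of (neighbour, edge_weight) pairs.
    Returns the lowest total weight from start to goal, or None if unreachable.
    """
    queue = [(0, start)]        # priority queue of (distance so far, node)
    best = {start: 0}
    while queue:
        dist, node = heapq.heappop(queue)
        if node == goal:
            return dist
        if dist > best.get(node, float("inf")):
            continue            # stale entry; a shorter route was already found
        for neighbour, weight in graph[node]:
            new_dist = dist + weight
            if new_dist < best.get(neighbour, float("inf")):
                best[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return None

# Hypothetical road network; weights are travel times in minutes.
roads = {
    "home":     [("junction", 5), ("motorway", 12)],
    "junction": [("office", 10)],
    "motorway": [("office", 2)],
    "office":   [],
}
print(shortest_path(roads, "home", "office"))  # 14, via the motorway
```

Swarm’s twist, described below, is to push this prioritisation into dedicated hardware using time stamps rather than a software queue.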

Swarm chips stand out from other multi-core chips by having extra circuitry dedicated to handling prioritisation tasks. The use of time stamps sets tasks in order of priority and spreads them across multiple cores.

One extra circuit is called a Bloom filter, which effectively manages the time stamps to prevent any conflicts in memory data access. The filter is built into another circuit, which records all the memory addresses of the data on which the cores are working.

These time stamps, coupled with extra circuitry on a Swarm chip, enforce better synchronisation of tasks on a multi-core chip by ensuring that a core with a later time stamp can’t overwrite the work of a core with an earlier time stamp still executing in the memory, thereby avoiding conflicts in data access. At the same time, this technique results in a hike in performance.
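For readers unfamiliar with the data structure, a Bloom filter is a compact bit array plus a handful of hash functions that can answer “definitely not seen” or “possibly seen” for an item without storing the item itself. The Python sketch below shows the generic technique only, not MIT’s circuit; the bit-array size, hash construction and example addresses are arbitrary assumptions.

```python
import hashlib

class BloomFilter:
    """Compact set-membership test with false positives but no false negatives."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits)   # one byte per bit, for simplicity

    def _positions(self, item):
        # Derive several bit positions from salted hashes of the item.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        # False means "definitely never added"; True means "possibly added".
        return all(self.bits[pos] for pos in self._positions(item))

# Track memory addresses touched by a (hypothetical) task.
touched = BloomFilter()
touched.add(0x7F3A00)
print(touched.might_contain(0x7F3A00))  # True
print(touched.might_contain(0x000100))  # almost certainly False
```

In Swarm’s case, as described above, the filter tracks the addresses that tasks are working on, so a “definitely not seen” answer lets a core proceed immediately, while a “possibly seen” answer flags a potential conflict to be resolved by the time stamps.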

We are not likely to see any Swarm chips in circulation any time soon, given that Intel already has its chip architecture planned out, but MIT’s research lays out an architecture that future chips could take to get the most out of their many cores.

If that hasn’t exploded your head, you can read all about MIT’s research into using graphene to make chips up to a million times faster than they are today.

Courtesy-TheInq

 

Astronomers Find Extremely Young Exoplanet

June 24, 2016 by Michael  
Filed under Around The Net

A distant, Neptune-size planet 500 light-years from Earth appears to be the youngest fully formed exoplanet ever found crossing its star, raising questions about how it formed so close, so quickly.

Researchers first found the planet, which whisks around its star every five days, using the Kepler space telescope currently orbiting Earth. Its star is only 5 million to 10 million years old, suggesting that the planet is a similar age — incredibly young, on a cosmic scale. Researchers said it was the youngest planet spotted fully formed around a distant star, and it is nearly 10 times closer to its star than Mercury is to the sun.

“Our Earth is roughly 4.5 billion years old,” Trevor David, a graduate student researcher at the California Institute of Technology and lead author of the new study, said in a statement. “By comparison, the planet K2-33b is very young. You might think of it as an infant.”

Most of the more than 3,000 confirmed planets around other stars orbit stars more than 1 billion years old, NASA Jet Propulsion Lab officials said in the statement — so this young star and planet pair offers a rare opportunity to see earlier stages of planet development.

Kepler detected the planet during its K2 mission by catching the star dimming and brightening periodically as the planet passed in front of it — a detection process known as the transit method. Researchers used data from the Keck Observatory in Hawaii and NASA’s Spitzer Space Telescope, in orbit around Earth, to verify that the darkening was caused by the planet and to see that the star is surrounded by a thin layer of debris.
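As a rough illustration of why the transit method works, the fractional dip in starlight is approximately the sky-projected area ratio of planet to star, depth ≈ (Rp/Rs)². The numbers below are generic stand-ins for a Neptune-size planet and a smallish young star, not measurements from the K2-33b study.

```python
# Back-of-the-envelope transit depth; the radii here are illustrative assumptions.
SUN_RADIUS_KM = 696_000
planet_radius_km = 24_600                  # roughly Neptune-size
star_radius_km = 0.6 * SUN_RADIUS_KM       # assumed compact young star

depth = (planet_radius_km / star_radius_km) ** 2
print(f"Fraction of starlight blocked: {depth:.2%}")  # about 0.35%
```

A dip of a few tenths of a per cent, repeating every five days, is exactly the kind of periodic dimming Kepler is built to pick out.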

That layer is likely the remnant of a thick disk of debris that encircled the star when it first formed — the raw material from which planetary systems form. In this case, the thin disk suggests the star is near the end of its planet-forming days, the researchers said in the study, released today (June 20) in the journal Nature.

“Initially, this material may obscure any forming planets, but after a few million years, the dust starts to dissipate,” Ann Marie Cody, a postdoctoral researcher at NASA’s Ames Research Center in California, said in the statement. “It is during this time window that we can begin to detect the signatures of youthful planets in K2.”

Combined with its youth, the planet’s close proximity to its star is a puzzling feature of the newly found system, the researchers said. Some astronomical theories suggest that a planet of its mass would have to form farther out and slowly migrate inward over hundreds of millions of years, but the star is too young for a process that long to have occurred, the researchers said in the statement.

Instead, it must have either migrated much more quickly, in a process called disk migration powered by the orbiting disk of gas and debris, or formed right at the spot that researchers see it in now.

“After the first discoveries of massive exoplanets on close orbits about 20 years ago, it was immediately suggested that they could absolutely not have formed there,” David said. “But in the past several years, some momentum has grown for in situ formation theories [that the planet could form right where it is], so the idea is not as wild as it once seemed.”

“The question we are answering is, did those planets take a long time to get into those hot orbits, or could they have been there from a very early stage? We are saying, at least in this one case, that they can indeed be there at a very early stage,” he added.

The planet K2-33b is one of two newborn-planet announcements published in today’s issue of Nature. The other newborn planet, which orbits a 2-million-year-old star called V830 Tau located 430 light-years away, appears to be a giant planet near the size of Jupiter sitting in an orbit one-twentieth the distance from Earth to the sun. The researchers identified the planet by watching its star wobble back and forth periodically as the massive planet orbited. If that planet formed farther outward and migrated closer, it would have had to rush in at a very early stage of its formation.

Courtesy-Space

 

Facebook Gearing Up For Live Streaming

June 23, 2016 by mphillips  
Filed under Around The Net

Facebook Inc has inked deals worth more than $50 million with media giants and celebrities to create videos for its live-streaming service, the Wall Street Journal reported.

Facebook has signed nearly 140 deals, including with CNN, the New York Times, Vox Media, Tastemade, Mashable and the Huffington Post, the Journal reported on Tuesday, citing a document.

Comedian Kevin Hart, celebrity chef Gordon Ramsay, wellness guru Deepak Chopra and NFL quarterback Russell Wilson are among the celebrities that Facebook has partnered with.

“We have an early beta program for a relatively small number of partners that includes a broad range of content types from regions around the world,” Justin Osofsky, the vice president of global operations and media partnerships at Facebook, said in an email.

“We wanted to invite a broad set of partners so we could get feedback from a variety of different organizations about what works and what doesn’t.”

The document shows that Facebook’s deal with online publisher BuzzFeed has the highest value at $3.05 million, the Journal said, followed by the New York Times at $3.03 million and CNN at $2.5 million.

 

Intel Debuts New Xeon Family Member

June 23, 2016 by Michael  
Filed under Computing

Intel has released yet another Xeon processor family, designed specifically for four-socket servers used in next-generation data centers and cloud deployments.

Intel said that servers with the Intel Xeon E5-4600 v4 family can now have up to 22 cores and 44 threads for enough processing power for most scale-out and large workloads.

Intel’s new range is claimed to be 2.6x better than previous generations. Along with the performance boost and the higher core and thread count, the new E5-4600 v4 family can provide up to 55MB of last-level cache, support up to 6TB of DDR4 2400 memory, and offer 40 lanes of PCIe 3.0.

The new processors have AES encryption and fast public key (RSA) encryption, along with strong random number generation, enabling hardened, pervasive data protection without impacting application response times. The family is supplied with Intel’s Intelligent Power technology to improve power efficiency across both the CPU and memory. The latest version of the processor family supports per-core P states (PCPS) to optimize the power usage of each processor core.

The new family has what Chipzilla calls advanced multi-core, multi-threaded processing –  up to 22 cores (previously up to 18) and 44 threads (previously up to 36) per socket for running more and heavier workloads and higher density of VMs per server.

There is a larger cache: up to 55MB (previously up to 45MB) of last-level cache for fast access to frequently used data. There is also faster memory, with up to 48 DIMMs per four-socket server for memory-intensive applications and higher maximum memory speeds with DDR4. Chipzilla claims that this gives higher performance for demanding workloads.

Optimized Intel Advanced Vector Extensions 2.0 (Intel AVX 2.0) enables applications to run at maximum “turbo” frequencies wherever possible, and Intel Turbo Boost Technology 2.0 acceleration takes advantage of power and thermal headroom.

The family also offers flexible, high-performance, hardware-enhanced virtualization, improving overall reliability and responsiveness through new Intel Virtualization Technology features, including posted interrupts, Page Modification Logging, and VM enter/exit latency reduction.

It also has multiple-rank sparing, DDR4 recovery for command and address parity errors, and the latest Intel Data Protection Technology.

Intel tells us that the new Intel Xeon E5-4600 v4 processor family is available now.

Courtesy-Fud

 

QLogic Gets Acquired

June 23, 2016 by Michael  
Filed under Computing

ARM Chip outfit Cavium has written a cheque for QLogic, a semiconductor firm which specialises in server and storage networking.

In what it calls a drive further into the data centre market, Cavium has entered into a definitive agreement to acquire all outstanding QLogic common stock in a deal worth approximately $1.36 billion.

The acquisition adds QLogic’s Fibre Channel and Ethernet controllers and boards to Cavium’s line-up of communications, security and general-purpose processors, making Cavium a full-line supplier to data centres.

It also means that Cavium can compete in storage and networking against Broadcom, Intel and Mellanox.  The deal also gives Cavium a mature software stack in storage and networking, and operational savings expected to amount to $45 million a year by the end of 2017.

Both companies sell to server makers and large data centres, with a customer overlap of more than 60 per cent. QLogic’s customer base is highly concentrated, with nearly 60 per cent of its business over the last several years going to HP, Dell and IBM.

Courtesy-Fud

 

Twitter Acquires Machine Learning Company Magic Pony

June 22, 2016 by mphillips  
Filed under Around The Net

Twitter has been quite vocal regarding its interest in machine learning in recent years, and earlier this week the company put its money where its mouth is once again by purchasing London startup Magic Pony Technology, which has focused on visual processing.

“Magic Pony’s technology — based on research by the team to create algorithms that can understand the features of imagery — will be used to enhance our strength in live [streaming] and video and opens up a whole lot of exciting creative possibilities for Twitter,” Twitter cofounder and CEO Jack Dorsey wrote in a blog post announcing the news.

The startup’s team includes 11 Ph.Ds with expertise across computer vision, machine learning, high-performance computing and computational neuroscience, Dorsey said. They’ll join Twitter’s Cortex group, made up of engineers, data scientists and machine-learning researchers.

Terms of the deal were not disclosed.

The acquisition follows several related purchases by the social media giant, including Madbits in 2014 and Whetlab last year.

 

 

 

IBM Funds Researchers Who Create KiloCore Processor

June 22, 2016 by Michael  
Filed under Computing

Researchers at the University of California, Davis, Department of Electrical and Computer Engineering have developed a 1,000-core processor which will eventually be put onto the commercial market.

The team developed the energy-efficient, 621 million transistor “KiloCore” chip, which can manage 1.78 trillion instructions per second, and since the project has IBM’s backing it could end up in the shops soon.

Team leader Bevan Baas, professor of electrical and computer engineering, said that it could be the world’s first 1,000-processor chip and that it is the highest clock-rate processor ever designed in a university.

While other multiple-processor chips have been created, none exceed about 300 processors. Most of those were created for research purposes and few are sold commercially. IBM, using its 32 nm CMOS technology, fabricated the KiloCore chip and could make a production run if required.

Because each processor is independently clocked, it can shut itself down to further save energy when not needed, said graduate student Brent Bohnenstiehl, who developed the principal architecture. Cores operate at an average maximum clock frequency of 1.78 GHz, and they transfer data directly to each other rather than using a pooled memory area that can become a bottleneck for data.

The 1,000 processors can execute 115 billion instructions per second while dissipating only 0.7 watts, which means the chip can be powered by a single AA battery. The KiloCore chip executes instructions more than 100 times more efficiently than a modern laptop processor.
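Those two figures are enough for a quick, hedged sanity check of the efficiency claim: 115 billion instructions per second at 0.7 watts works out to roughly 164 billion instructions per joule, using only the numbers quoted above.

```python
# Figures quoted in the article; nothing else is assumed.
instructions_per_second = 115e9    # 115 billion instructions per second
power_watts = 0.7                  # dissipation at that operating point

# A watt is a joule per second, so dividing gives instructions per joule.
instructions_per_joule = instructions_per_second / power_watts
print(f"{instructions_per_joule:.3e} instructions per joule")  # ~1.643e+11
```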

The processor has already been adapted for wireless coding/decoding, video processing, encryption, and other tasks involving large amounts of parallel data, such as scientific data applications and datacentre work.

Courtesy-Fud

 

Is IBM’s Watson Driving Cars?

June 22, 2016 by Michael  
Filed under Computing

IBM has put its Watson artificial intelligence (AI) tech into a 3D printed car, merging two tech trends to create a self-driving pseudo milk float.

The electric vehicle, dubbed Olli, can hold up to 12 people rather than a load of bottled cow juice, and can be seen on the streets of Washington DC, and soon Miami-Dade County and Las Vegas.

OK, so we know that driverless cars are pretty much a thing now, particularly as we spotted one near Google’s Mountain View HQ. But the smart thing about Olli is its use of Watson Internet of Things (IoT) for Automotive, a version of the cognitive computing tech that allows people to talk to the vehicle.

Passengers can ask Olli the evergreen ‘Are we there yet?’ and it will answer, hopefully in a mildly exasperated voice. It’s basically a bit like Knight Rider‘s KITT only less stylish and without The Hoff.

However, Olli is not just a chatty car as the Watson IoT tech helps it to learn as it ferries people around and gathers data from over 30 sensors around the chassis.

The Olli was designed and built by Local Motors. Co-founder John B Rogers Jr, who has one of the most American names we have ever written, said: “Olli with Watson acts as our entry into the world of self-driving vehicles, something we’ve been quietly working on with our co-creative community for the past year.

“We are now ready to accelerate the adoption of this technology and apply it to nearly every vehicle in our current portfolio and those in the very near future. I’m thrilled to see what our open community will do with the latest in advanced vehicle technology.”

Several tech and car companies are now working on driverless cars, and the roads could become an automotive robot battleground as Google’s self-driving cars compete with Mercedes’ autonomous automobiles.

Courtesy-TheInq

 

Astronomers Find Primordial Gas In First Galaxies

June 22, 2016 by Michael  
Filed under Around The Net

Astronomers have discovered signs of oxygen in one of the universe’s first galaxies, which was born shortly after the cosmic “Dark Ages” that existed before the universe had stars, a new study finds.

The discovery — which centers on the truly ancient galaxy SXDF-NB1006-2, located about 13.1 billion light-years from Earth — could help solve the mystery of how much the first stars helped to clear the murky fog that once filled the universe, the researchers said.

Previous research suggested that, after the universe was born in the Big Bang about 13.8 billion years ago, the universe was so hot that all of the atoms that existed were split into positively charged nuclei and negatively charged electrons. This soup of electrically charged ions scattered light, preventing it from traveling freely.

Prior work suggested that, about 380,000 years after the Big Bang, the universe cooled down enough for these particles to recombine into atoms, finally allowing the first light in the cosmos — that from the Big Bang — to shine. However, after this era of recombination came the cosmic “Dark Ages”; during this epoch, there was no other light, as stars had not formed yet.

Previous research also suggested that, starting about 150 million years after the Big Bang, the universe began to emerge from the cosmic Dark Ages during a time known as reionization. During this epoch, which lasted more than a half billion years, clumps of gas collapsed enough to form the first stars and galaxies, whose intense ultraviolet light ionized and destroyed most of the neutrally charged hydrogen, splitting it to form protons and electrons.

Details about the epoch of reionization are extremely difficult to glean because they happened so long ago. To see light from such ancient times, researchers look for objects that are as far away as possible — the more distant they are, the more time their light took to get to Earth. Such distant objects are only viewable with the best telescopes available today.

Much remains unknown about the epoch of reionization, such as what the first stars were like, how the earliest galaxies formed and what sources of light caused reionization. Some prior work suggested that massive stars were mostly responsible for reionization, but other research hinted that black holes were a significant and potentially dominant culprit behind this event.

Now, by looking at an ancient galaxy, researchers may have discovered clues as to the cause of reionization.

“The galaxy we observed may be a strong light source for reionization,” study lead author Akio Inoue, an astronomer at Osaka Sangyo University in Japan, told Space.com.

Hunting for ancient galaxies with oxygen

Scientists analyzed a galaxy called SXDF-NB1006-2, located about 13.1 billion light-years from Earth. When this galaxy was discovered in 2012, it was the most distant galaxy known at that time.

Using data from the Atacama Large Millimeter/submillimeter Array (ALMA) in the Atacama Desert in Chile, the researchers saw what SXDF-NB1006-2 looked like 700 million years after the Big Bang. They focused on light from oxygen and from dust particles.

“Seeking heavy elements in the early universe is an essential approach to explore the star formation activity in that period,” Inoue said in a statement.

The scientists spotted clear signs of oxygen from SXDF-NB1006-2, the most distant oxygen detected yet. This oxygen was ionized, suggesting that this galaxy possessed a number of young, giant stars several dozen times heavier than the sun. These young stars would have also emitted intense ultraviolet light, the researchers suggested.

The scientists estimated that oxygen was 10 times less abundant in SXDF-NB1006-2 than it was in the sun. This estimate matched the research team’s simulations — only light elements such as hydrogen, helium and lithium existed when the universe was first born, while heavier elements, such as oxygen, were later forged in the hearts of stars.

However, unexpectedly, the researchers found that SXDF-NB1006-2 has two to three times less dust than simulations had predicted. This dearth of dust may have aided reionization by allowing light from that galaxy to ionize the vast amount of gas outside that galaxy, the researchers said.

“SXDF-NB1006-2 would be a prototype of the light sources responsible for the cosmic reionization,” Inoue said in a statement.

One possible explanation for the smaller amount of dust is that shock waves from supernova explosions may have destroyed it, the researchers said. Another possibility is that there may not have been much in the way of cold, dense clouds in the space between SXDF-NB1006-2’s stars; dust grains grow in such clouds a bit like snowflakes do in cold clouds on Earth.

This research may help to answer what caused reionization. “The source of reionization is a long-standing matter — massive stars or supermassive black holes?” Inoue said. “This galaxy seems not to have a supermassive black hole, but have a number of massive stars. So massive stars may have reionized the universe.”

The researchers are continuing to analyze SXDF-NB1006-2 with ALMA.

“Higher-resolution observations will allow us to see the distribution and motion of ionized oxygen in the galaxy and provide precious information to understand the properties of the galaxy,” study co-author Yoichi Tamura, of the University of Tokyo, said in a statement.

Courtesy-Space

 

Dell Close To Deal To Sell Software Business

June 21, 2016 by mphillips  
Filed under Computing

Buyout firm Francisco Partners and the private equity arm of activist hedge fund Elliott Management Corp are in advanced negotiations to purchase Dell Inc’s software division for more than $2 billion, three people familiar with the matter said.

Divesting the software assets will help Dell refocus its technology portfolio and bolster its balance sheet after it agreed in October to buy data storage company EMC Corp for $67 billion. EMC owns a controlling stake in VMware Inc, a cloud-based virtualization software company.

Dell is seeking to sell almost all of its software assets, including Quest Software, which helps with information technology management, as well as SonicWall, an e-mail encryption and data security provider, the people said.

Boomi, a smaller asset focusing on cloud-based software integration, will be retained by Dell, one of the people added.

An agreement between Dell and the consortium of Francisco Partners and Elliott could be reached as early as this week, the people said, cautioning that the negotiations could still end unsuccessfully.

The sources asked not to be identified because the negotiations are confidential. Dell declined to comment, while Francisco Partners and Elliott did not immediately respond to requests for comment.

A sale of Dell’s software division would free it from some of its least profitable assets and cap the program of divestitures that the Round Rock, Texas-based computer maker embarked on following its deal with EMC. EMC shareholders are due to vote on the deal with Dell on July 19.

While Elliott has sought to buy companies in the past as part of its shareholder activist campaigns, the Dell software deal would represent its first major private equity investment since it hired Isaac Kim, previously a principal at private equity firm Golden Gate Capital, last year to help expand its capacity in leveraged buyouts.

Francisco Partners focuses on private equity investments in the technology sector. It has raised about $10 billion in capital and invested in more than 150 technology companies since it was launched more than 15 years ago.

 

 

 

FAA Set To Announce Commercial Drone Rules

June 21, 2016 by mphillips  
Filed under Around The Net

The U.S. Federal Aviation Administration this week is expected to unveil rules for the commercial use of drones, but the new regulations will limit their flights to daytime and to within the line of sight of operators.

The specifics of the rules, which will allow drones weighing about 50 pounds, could come at some point today, The Wall Street Journal reported, quoting industry officials. But they are unlikely to please some proposed commercial drone operators, who would like the aircraft to be allowed to operate at night and outside the operator’s line of sight.

The FAA in February 2015 proposed draft rules, which would allow commercial drones, also known as unmanned aircraft systems, to operate, though under restrictions such as a maximum weight of 55 pounds (25 kilograms), flight altitude of a maximum of 500 feet (152 meters) above ground level, and rules that limit flights to daylight and to the visual line of sight of the operators.

FAA Administrator Michael Huerta said in January that the much-delayed rules would be finalized by late spring. “By late spring, we plan to finalize Part 107, our small UAS rule, which will allow for routine commercial drone operations,” Huerta said at an event in May, reiterating the proposed timeline.

But Amazon.com told the FAA last year that the rules as proposed would not allow its Prime Air package delivery service to take off. Pointing out that its drones require minimal human intervention, Amazon recommended that the rules “specifically permit the operation of multiple small UAS by a single UAS operator when demonstrated that this can be done safely.”

The FAA said in May it was setting up a long-term advisory committee, led by Intel CEO Brian Krzanich, to guide it on the integration of unmanned aircraft systems in the national airspace. The FAA has already been permitting as exemptions some experimental uses of drones.

New safety rules in the Federal Aviation Administration Reauthorization Act of 2016, passed by the U.S. Senate in April, propose a pilot program to develop and test technologies to intercept or shut down drones when they are near airports. To avoid conflict between the variety of laws enacted by the states and federal regulations on drones, the bill has proposed that the FAA rules on drones get preemption over local and state laws. But some legislators  are expected to oppose the rule that will prevent the states from making laws on drones as the bill goes to the U.S. House of Representatives.

 

 

Toyota To Build Artificial Intelligence-based Driving Systems

June 21, 2016 by mphillips  
Filed under Around The Net

Toyota Motor Corp is focusing on developing, over the next five years, driver assistance systems that integrate artificial intelligence (AI) to improve vehicle safety, the head of its advanced research division said.

Gill Pratt, CEO of the recently established Toyota Research Institute (TRI), the Japanese automaker’s research and development company that focuses on AI, said it aims to improve car safety by enabling vehicles to anticipate and avoid potential accident situations.

Toyota has said the institute will spend $1 billion over the next five years, as competition to develop self-driving cars intensifies.

Earlier this month, domestic rival Honda Motor Co said it was setting up a new research body focused on artificial intelligence, joining other global automakers investing in robotics research, including Ford and Volkswagen AG.

“Some of the things that are in car safety, which is a near-term priority, I’m very confident that we will have some advances come out during the next five years,” Pratt told reporters late last week in comments embargoed for Monday.

The concept of allowing vehicles to think, act and take some control from drivers to perform evasive maneuvers forms a key platform of Toyota’s efforts to produce a car which can drive automatically on highways by the 2020 Tokyo Olympics.

While current driver assistance systems largely use image sensors to avoid obstacles, including vehicles and pedestrians within the car’s lane, Pratt said TRI was looking at AI solutions to enable “the car to be evasive beyond the one lane”.

“The intelligence of the car would figure out a plan for evasive action … Essentially (it would) be like a guardian angel, pushing on the accelerators, pushing on the steering wheel, pushing on the brake in parallel with you.”

As Japanese automakers race against technology companies to develop automated vehicles, they are also grappling with a rapidly graying society, which puts future demand for private vehicle ownership at risk.

Pratt said he saw the possibility that Toyota may one day become a maker of robots to help the elderly.

Asked about the potential for Toyota to produce robots for use in the home, he said: “That’s part of what we’re exploring at TRI.”

Pratt declined to comment on a media report earlier this month that Toyota is in talks with Google’s parent company Alphabet to acquire Boston Dynamics and Schaft, both of which are robotics divisions of the technology company.