Cisco Systems, Thales Sign Deal On Cybersecurity

June 30, 2016 by mphillips  
Filed under Around The Net

French electronics group Thales is looking to boost its revenue by hundreds of millions of euros in the cybersecurity field through a strategic agreement it has signed with Cisco Systems, it said on Tuesday.

“We hope that with this agreement, we will add several hundred million euros in the coming years,” said Jean-Michel Lagarde, who heads secure communications and information systems at Thales.

“It will have a multiplier effect, as this is not only about cybersecurity, but also about secure systems for cities and airports.”

The two companies have been partners since 2010 and plan to co-develop a solution to better detect and counter cyber attacks in real time, Thales said.

Thales generates 500 million euros ($550 million) annually in the cybersecurity business, notably in data protection thanks to its acquisition of Vormetric in March for 375 million euros.

The jointly developed solution will be aimed first at French infrastructure providers and will then be deployed globally, Cisco and Thales said in a statement.

 

 

Is The Xbox Scorpio A Tactical Move By Microsoft?

June 30, 2016 by Michael  
Filed under Gaming

While Sony wowed gamers at its E3 press conference this year with a barrage of impressive content, some would argue that it was Microsoft that made the biggest splash by choosing its press conference to announce not one, but two distinct console hardware upgrades that will hit the market in consecutive years (Xbox One S this year, Scorpio in 2017). Years from now, this may be the grand moment that we all point to as forever changing the evolution of the console business. Sony, too, is preparing a slight upgrade to the PS4 with the still-to-be-unveiled Neo, and while it won’t be as powerful as Scorpio, it’s not a stretch to assume that Sony is already working on the next, more powerful PlayStation iteration as well. We can all kiss the five- or six-year console cycle goodbye now, but the publishers we spoke to at E3 all believe that this is ultimately great for the console industry and the players.

The most important aspect of all of this is the way in which Sony and Microsoft intend to handle their respective audiences. Both companies have already said that players of the older hardware will not be left behind. The ecosystem will carry on, and to EA global publishing chief Laura Miele that is a very good thing indeed.

“I perceive it as upgrades to the hardware that will actually extend the cycle,” she told me. “I actually see it more as an incredibly positive evolution of the business strategy for players and for our industry and definitely for EA. The idea that we would potentially not have an end of cycle and a beginning of cycle I think is a positive place for our industry to be and for all of the commercial partners as well as players.

“I have an 11-year-old son who plays a lot of games. We changed consoles and there are games and game communities that he has to leave behind and go to a different one. So he plays on multiple platforms depending on what friends he’s playing with and which game he’s going to play. So the idea that you have a more streamlined thoroughfare transition I think is a big win… things like backwards compatibility and the evolution,” she continued.

“So it’s not my perception that the hardware manufacturers are going to be forcing upgrades. I really see that they’re trying to hold on and bring players along. If players want to upgrade, they can. There will be benefit to that. But it’s not going to be punitive if they hold on to the older hardware… So we’re thrilled with these announcements. We’re thrilled with the evolution. We’re thrilled with what Sony’s doing, what Microsoft’s doing and we think it’s phenomenal. I think that is good for players. It’ll be great for us as a publisher about how they’re treating it.”

Ubisoft’s head of EMEA Alain Corre is a fan of the faster upgrade approach as well. “The beautiful thing is it will not split the communities. And I think it’s important that when you’ve been playing a game for a lot of years and invested a lot of time that you can carry on without having to start over completely again. I think with the evolution of technology it’s better than what we had to do before, doing a game for next-gen and a different game from scratch for the former hardware. Now we can take the best of the next console but still have super good quality for the current console, without breaking the community up. We are quite big fans of this approach,” he said.

Corre also noted that Ubisoft loves to jump on board new technologies early (as it’s done for Wii, Kinect, VR and now Nintendo NX with Just Dance), and its studios enjoy being able to work with the newest tech out there. Not only that, but the new consoles often afford publishers the opportunity to build out new IP like Steep, he said.

“Each time there’s a new machine with more memory then our creators are able to bring something new and fresh and innovate, and that’s exciting for our fans who always want to be surprised. So the fact that Microsoft announced that they want to move forward to push the boundaries of technology again is fantastic news. Our creators want to go to the limit of technology to make the best games they can… so the games will be better in the years to come which is fantastic for this industry. And at Ubisoft, it’s also in our DNA to be [supportive] early on with new technology. We like taking some risks in that respect… We believe in new technology and breaking the frontiers and potentially attracting new fans and gamers into our ecosystem and into our brands,” Corre continued.

Take-Two boss Strauss Zelnick pointed out the continuity in the communities as well. “The ecosystems aren’t shifting as much. We essentially have a common development architecture now that’s essentially a PC architecture,” he said. And if the console market truly is entering an almost smartphone like upgrade curve, “It would be very good for us obviously. To have a landscape…where you put a game out and you don’t worry about it,” he commented, “the same way that when you make a television show you don’t ask yourself ‘what monitor is this going to play on?’ It could play on a 1964 color television or it could play on a brand-new 4K television, but you’re still going to make a good television show.

“So we will for sure get there as an industry. We will get to the point where the hardware becomes a backdrop. And sure, constantly more powerful hardware gives us an opportunity but it would be great to get to a place where we don’t have a sine curve anymore, and I do see the sine curve flattening but I’m not sure I agree it’s going away yet… That doesn’t change any of our activities; we still have to make the very best products in the market and we have to push technology to its absolute limit to do so.”

Courtesy-GI.biz

 

AMD’s Radeon RX 480 Goes On Sale Today On The Cheap

June 30, 2016 by Michael  
Filed under Computing

With its aggressive pricing move on the Radeon RX 480, AMD has little choice but to price the lower-positioned Radeon RX 470 just as aggressively, which should mean a US $149 MSRP for the 4GB version.

The Radeon RX 470 is rumored to be based on the 14nm Polaris Ellesmere Pro GPU with 32 Compute Units and 2,048 Stream Processors, a significant cut from the 2,304 Stream Processors of the Ellesmere XT-based Radeon RX 480. It should carry a US $149 MSRP for the 4GB version and US $179 for the 8GB version.

According to recent rumors, the GDDR5 memory on the RX 470 should also be clocked lower, at an effective 7,000MHz, pushing 224GB/s of total memory bandwidth, while the TDP should end up at 110W.
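
As a back-of-the-envelope check on that figure, GDDR5 bandwidth is simply the effective data rate multiplied by the bus width. A minimal sketch in Python, assuming the RX 470 keeps the RX 480’s 256-bit memory bus (the rumors do not confirm the bus width), reproduces the quoted 224GB/s:

    # GDDR5 bandwidth = effective data rate x bus width.
    # Assumption (not confirmed by the rumors): a 256-bit bus,
    # the same width as on the Radeon RX 480.
    effective_rate_gbps = 7.0   # 7,000MHz effective data rate per pin
    bus_width_bits = 256        # assumed bus width

    bandwidth_gb_s = effective_rate_gbps * bus_width_bits / 8  # bits -> bytes
    print(f"{bandwidth_gb_s:.0f} GB/s")  # -> 224 GB/s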

With these specifications, the Radeon RX 470 could end up faster than the Radeon R9 380X, which would make it a perfect choice for 1080p gamers.

The AMD Radeon RX 480 should launch today, June 29th, while the rest of the lineup, including the Radeon RX 470 and RX 460, could come a bit later.

Courtesy-Fud

 

Added Benefit Of Cloud Computing: Less Energy Usage

June 29, 2016 by mphillips  
Filed under Computing

Just a decade ago, power usage at data centers was growing at an unsustainable rate, soaring 24% from 2005 to 2010. But a shift to virtualization, cloud computing and improved data center management is reducing energy demand.

According to a new study, data center energy use is expected to increase just 4% from 2014 to 2020, despite growing demand for computing resources.

Total data center electricity usage in the U.S., which includes powering servers, storage, networking and the infrastructure to support it, was at 70 billion kWh (kilowatt hours) in 2014, representing 1.8% of total U.S. electricity consumption.

Based on current trends, data centers are expected to consume approximately 73 billion kWh in 2020, with usage staying nearly flat over the next four years. “Growth in data center energy consumption has slowed drastically since the previous decade,” according to a study by the U.S. Department of Energy’s Lawrence Berkeley National Laboratory. “However, demand for computations and the amount of productivity performed by data centers continues to rise at substantial rates.”
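
The two usage figures are consistent with the study’s headline growth number; a quick sanity check of the arithmetic:

    # Projected growth in U.S. data center electricity use.
    usage_2014 = 70  # billion kWh
    usage_2020 = 73  # billion kWh, projected

    growth = (usage_2020 / usage_2014 - 1) * 100
    print(f"Total growth 2014-2020: {growth:.1f}%")  # ~4%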

Improved efficiency is most evident in the growth rate of physical servers.

From 2000 to 2005, server shipments increased 15% each year, resulting in a near doubling of servers in data centers. From 2005 to 2010, the annual shipment increases fell to 5%, but some of this decline was due to the recession. Nonetheless, this server growth rate is now at 3%, a pace that is expected to continue through 2020.
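
The “near doubling” follows directly from compounding 15% annual growth over five years; the same calculation shows how much gentler the later rates are:

    # Cumulative effect of annual server shipment growth over five years.
    def compound(rate, years=5):
        return (1 + rate) ** years

    print(f"15%/yr: x{compound(0.15):.2f}")  # ~2.01, a near doubling
    print(f" 5%/yr: x{compound(0.05):.2f}")  # ~1.28
    print(f" 3%/yr: x{compound(0.03):.2f}")  # ~1.16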

The reduced server growth rate is a result of the increase in server efficiency, better utilization thanks to virtualization, and a shift to cloud computing. This includes concentration of workloads in so-called “hyperscale” data centers, defined as 400,000 square feet in size and above.

Energy use by data centers may also decline if more work is shifted to hyperscale centers, and best practices continue to win adoption.

 

 

 

Airbnb Sues San Francisco, Claims Free Speech Violated

June 29, 2016 by mphillips  
Filed under Uncategorized

Airbnb has filed suit against the city of San Francisco, claiming that a recent ordinance which requires hosts to register with the city violates the online home-sharing company’s free speech rights.

A San Francisco law slated to take effect next month requires companies like Airbnb to verify that rentals have a valid registration number issued by the city. The ordinance would impose fines on the company of up to $1,000 per day for each offense.

Airbnb’s lawsuit claims that the ordinance violates federal communications laws and asks a judge to block it. The law cannot fix San Francisco’s housing crunch, the company said in a blog post.

“This legislation ignores the reality that the system is not working and this new approach will harm thousands of everyday San Francisco residents who depend on Airbnb,” the company said.

Matt Dorsey, a spokesman for the San Francisco city attorney’s office, said nothing in the ordinance punishes Airbnb for its hosts’ content. Rather, the ordinance is intended to facilitate tax collection, he said.

“In fact, it’s not regulating user content at all – it’s regulating the business activity of the hosting platform itself,” Dorsey said in an email.

The case in U.S. District Court, Northern District of California is Airbnb Inc vs City and County of San Francisco, 16-03615.

 

Is Intel Going To Dump McAfee?

June 29, 2016 by Michael  
Filed under Uncategorized

Intel has run out of ideas about what it is going to do with its security business and is apparently planning to flog it off.

Five years ago Intel bought McAfee in a $7.7bn acquisition. Two years ago it re-branded it as Intel Security. There was talk of chip-based security and how important this would be as the world moved to the Internet of Things.

Now the company has discussed the future of Intel Security with bankers, including a potential sale of the outfit. The semiconductor company has been shifting its focus to higher-growth areas, such as chips for data center machines and Internet-connected devices, as the personal-computer market has declined.

The security sector has seen a lot of interest from private equity buyers. Symantec said earlier this month it was acquiring Web security provider Blue Coat for $4.65 billion in cash, in a deal that will see Silver Lake, an investor in Symantec, enhancing its investment in the merged company, and Bain Capital, majority shareholder in Blue Coat, reinvesting $750 million in the business through convertible notes.

However, Intel’s move into the Internet of Things makes it difficult for the company to exit the security business completely. In fact, some analysts think it will sell off only part of the business and keep some key bits for itself.

Courtesy-Fud

 

Can Licensing Save BlackBerry?

June 29, 2016 by Michael  
Filed under Mobile

BlackBerry is hoping to pull its nadgers out of the fire by licensing its mobile software to other outfits.

However, BlackBerry CEO John Chen had to admit that there has been zero revenue from the endeavour, which only kicked off last month.

Chen said he’s been in discussions with some phone manufacturers and set-top box operators who have expressed interest, and that “anything was possible.”

He added that he’s not opposed to licensing BlackBerry’s security software either, if the right deal comes along. He expects BlackBerry to break even or record a slight profit in its new mobility solutions segment, which includes device and software licensing sales, during the third quarter, which ends in November.

Making the segment profitable this fiscal year is one of the company’s top goals, Chen said.

It’s too soon to project how much revenue the software-licensing venture can garner, Chen said, so to achieve the goal by the end of November, BlackBerry will have to ensure its devices are on track for profitability as well.

The company’s newest phone, the Android-powered Priv, has moved slower than hoped. In fact it has moved slower than a student who has been up all night playing Counter-Strike.

During BlackBerry’s first quarter — the second full quarter to include Priv sales — the smartphone segment generated US$152 million of revenue and had a US$21-million operating loss. Chen promised that loss would be significantly smaller in the next quarter.

The company sold roughly 500,000 devices at an average price of $290 each, he said, which is about 100,000 smartphones fewer than the previous quarter and about 200,000 fewer than two quarters earlier. BlackBerry previously said the company needs to sell about three million phones at an average of $300 each to break even, though Chen indicated that may change as the software licensing business starts to contribute to revenue.
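
Those unit numbers line up roughly with the reported segment revenue, and they make clear how far the device business is from its target. A rough sketch, treating the three-million break-even figure as an annual target (an assumption; the report does not spell out the period):

    # Rough check on BlackBerry's quarterly device numbers.
    units_sold = 500_000
    avg_price = 290  # USD

    revenue = units_sold * avg_price
    print(f"Implied device revenue: ${revenue / 1e6:.0f}M")  # ~$145M vs $152M reported

    # Break-even target, assumed here to be annual: 3 million phones at $300.
    print(f"Annualized pace: {units_sold * 4:,} units vs 3,000,000 needed")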

Chen said the Priv has proved unaffordable to most people, except for top-level executives.

The company plans to release two mid-range, Android-powered phones before its current fiscal year ends Feb. 28, 2017, he said. More information on the devices is expected next month, but Chen said one will only have a touch screen rather than BlackBerry’s traditional keyboard.

The company is trying to reach the market in more innovative ways. It’s currently hosting a pop-up shop in New York City, and Chen said he’d consider more of them around the world if the trial is successful.

“I really, really believe that we could make money … out of our device business,” he said during a conference call with analysts Thursday morning.

Chen previously indicated the company will stop making smartphones if the device business remains unprofitable. While he said he doesn’t believe that will be necessary, the software licensing plan could help make the transition smoother if the time comes.

BlackBerry reported a $670 million net loss in the first quarter of its 2017 financial year, but said its recovery plan for the year remains on track.

Revenue was below analyst estimates at $400 million under generally accepted accounting principles, or US$424 million with certain adjustments.

Courtesy-Fud

 

EU, USA Reach Deal On Data-transfer Pact

June 28, 2016 by mphillips  
Filed under Around The Net

The U.S. and the European Union have reportedly come to an agreement on the language of a key data transfer pact, including limits on U.S. surveillance.

The revamped EU-U.S. Privacy Shield was sent to EU member states overnight, according to a report from Reuters. Privacy Shield would govern how multinational companies handle the private data of EU residents.

Member states are expected to vote on the proposal in July, unnamed sources told Reuters. Representatives of the EU and the U.S. Department of Commerce didn’t immediately respond to requests for comments on the reported deal.

Critics of Privacy Shield, including European privacy regulators, have said the deal is too complex and fails to reflect key privacy principles.

The new language sent to member states includes stricter data-handling rules for companies holding Europeans’ information, Reuters reported. The new proposal also has the U.S. government spelling out the conditions under which it would collect data in bulk, according to the report.

Negotiators on both sides of the Atlantic have been rushing to craft a new trans-Atlantic data transfer agreement since the Court of Justice of the European Union struck down Safe Harbor, the previous transfer pact, last October.

The court ruled that Safe Harbor didn’t adequately protect European citizens’ personal information from massive and indiscriminate surveillance by U.S. authorities. Safe Harbor had been in place since 2000.

 

 

Intel May Get Rid Of Its Security Business

June 28, 2016 by mphillips  
Filed under Around The Net

Intel is contemplating shedding its security business as the company tries to focus on delivering chips for cloud computing and connected devices, according to a news report.

The Intel Security business came largely from the company’s $7.7 billion acquisition of security software company McAfee. Intel announced plans to bake some of the security technology into its chips to ensure higher security for its customers.

With the surge in cyberthreats, providing protection to the variety of Internet-connected devices — such as PCs, mobile devices, medical gear and cars — requires a fundamentally new approach involving software, hardware and services, the company said in February 2011, when announcing the completion of the McAfee acquisition.

Intel has been talking to bankers about the future of its cybersecurity business for a deal that would be one of the largest in the sector, reported The Financial Times, citing people close to the discussions. It said a group of private equity firms may join together to buy the security business if it is sold at the same price or higher than what Intel paid for it.

“I could see them selling a piece of the service, but not all security capabilities,” said Patrick Moorhead, president and principal analyst at Moor Insights & Strategy.

“Intel has a decent security play right now and security is paramount to the future of IoT,” Moorhead said. “Hardware-based security is vital to the future of computing.”

Intel declined to comment on the report, a company spokeswoman wrote in an email.

 

 

IBM Going After Chinese Supercomputer

June 28, 2016 by Michael  
Filed under Computing

The US is clearly embarrassed that the Chinese Sunway TaihuLight system is leading the supercomputer arms race. Now the Department of Energy’s (DOE) Oak Ridge National Laboratory has announced that it is having a new IBM system, named Summit, delivered in early 2018 that will be capable of 200 peak petaflops.

That would make it almost twice as fast as TaihuLight. Summit will be based around IBM Power9 CPUs and Nvidia Volta GPUs and will use only about 3,400 nodes. Each node will have “over half a terabyte” of coherent memory (HBM + DDR4), plus 800GB of non-volatile RAM that serves as a burst buffer or extended memory.
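
The node count implies a hefty amount of compute per node; dividing the quoted peak by the quoted node count gives a rough per-node figure:

    # Rough per-node performance implied by the quoted Summit figures.
    peak_petaflops = 200
    nodes = 3_400

    per_node_tflops = peak_petaflops * 1_000 / nodes  # 1 PFlops = 1,000 TFlops
    print(f"~{per_node_tflops:.0f} TFlops per node")  # ~59 TFlops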

IBM are not the only ones worried about the Chinese getting ahead on speed. Cray announced this week that its Cray XC systems are now available with the latest Intel Xeon Phi (Knights Landing) processors.

The company said the new XC systems, which feature an adaptive design that supports multiple processor and storage technologies in the same architecture, deliver a 100 per cent performance boost over prior generations. Cray also unveiled the Sonexion 3000 Lustre storage system, which can deliver speeds of almost 100GB/sec in a single rack. These should be rather good at number crunching too.

Courtesy-Fud

 

Intel And Nokia Joining Forces

June 28, 2016 by Michael  
Filed under Around The Net

Nokia is teaming up with Intel to make its carrier-grade AirFrame Data Center Solution hardware available for an Open Platform for Network Functions Virtualization (OPNFV) Lab.

Basically this means that the hardware can be used by the OPNFV collaborative open source community to accelerate the delivery of cloud-enabled networks and applications.

Nokia said the OPNFV Lab will be a testbed for NFV developers and will accelerate the introduction of commercial open source NFV products and services. Developers can test carrier-grade NFV applications for performance and availability.

Nokia is making its AirFrame Data Center Solution available as a public OPNFV Lab with the support of Intel, which is providing Intel Xeon processors and solid state drives to give communications service providers the advantage of testing OPNFV projects on the latest and greatest server and storage technologies.

The Nokia AirFrame Data Center Solution is 5G-ready and Nokia said it was the first to combine the benefits of cloud computing technologies to meet the stringent requirements of the telco world. It’s capable of delivering ultra-low latency and supporting the kinds of massive data processing requirements that will be required in 5G.

“NFV interoperability testing is challenging, so the more labs we have, the better it will be collectively for the industry,” said Morgan Richomme, NFV network architect for Innovative Services at Orange Labs and OPNFV Functest PTL, in a release.

AT&T has officially added Nokia to its list of 5G lab partners working to define 5G features and capabilities. It’s also working with Intel and Ericsson.

Courtesy-Fud

 

Apple Pulls The Plug On Thunderbolt Display

June 27, 2016 by mphillips  
Filed under Consumer Electronics

Apple announced that it will discontinue its Thunderbolt Display, the high-resolution external display that users of the MacBook and other Macs could use to get a better picture and work with more apps.

The company said Thursday that the 27-inch widescreen display with LED backlight technology will be available on Apple’s online store, in Apple retail stores and from authorized resellers while supplies last.

The Thunderbolt Display currently retails on the Apple online store at $999. It has a 2560 x 1440 resolution.

It isn’t clear whether Apple plans to follow with newer versions that use 5K resolution displays at 5120 by 2880 pixels, which is the display technology Apple uses on its high-end iMac. There was speculation earlier that a new version would be announced at the company’s Worldwide Developers Conference this month.

An Apple spokeswoman declined to comment on whether Apple planned to offer a refresh to the display.

Apple said in an emailed statement that “there are a number of great third-party options available for Mac users.”

 

 

Apple Begins Testing Of Safari 10 Browser

June 27, 2016 by mphillips  
Filed under Around The Net

Apple has begun testing Safari 10 with developers running the 2014 and 2015 editions of macOS, gearing up for a fall release of the updated browser to users of Yosemite and El Capitan.

Safari 10 was introduced earlier this month as part of macOS Sierra, this year’s operating system upgrade.

Apple typically supports its newest browser on three editions of macOS: the latest version and its two predecessors. The now-current Safari 9, for example, receives updates, including security patches, on last year’s El Capitan, 2014’s Yosemite and 2013’s Mavericks.

Safari 10 will be supported on Sierra, El Capitan and Yosemite. Meanwhile, Mavericks will remain on Safari 9.

The Safari 10 preview is currently available only to registered Apple developers, who pay $99 annually for access to early builds, development tools and documentation.

The general public will get its first look at Safari 10 next month after Apple opens up its broader-based public beta program for Sierra. Those who have signed on to the beta preview will also be able to download preliminary versions of Safari 10 for El Capitan and Yosemite, running the preview browser but sticking with their older, more stable operating systems.

Some of Safari 10’s signature features will be available only within macOS Sierra, including web-based Apple Pay, where payment is authorized with an iPhone or Apple Watch, but others will be supported by older versions of the operating system. Among the most notable are the new ability for developers to distribute and sell Safari add-ons in the Mac App Store, and easy portability of iOS content blockers to macOS.

If Apple replicates last year’s beta schedule, it will release the first public preview of macOS Sierra and Safari 10 around July 14.

 

 

Interest Grows In Collaborative Robots

June 27, 2016 by mphillips  
Filed under Around The Net

Robots that work as assistants in unison with people are set to upend the world of industrial robotics by putting automation within reach of many small and medium-sized companies for the first time, according to industry experts.

Collaborative robots, or “cobots”, tend to be inexpensive, easy to use and safe to be around. They can easily be adapted to new tasks, making them well-suited to small-batch manufacturing and ever-shortening product cycles.

Cobots can typically lift loads of up to 10 kilograms (22 lb) and can be small enough to sit on top of a workbench. They can help with repetitive tasks like picking and placing, packaging, or gluing and welding.

Some can repeat a task after being guided through the process once by a worker, recording the motions as they go. The price of a cobot can be as little as $10,000, although typically they cost two to three times that.

The global cobot market is set to grow from $116 million last year to $11.5 billion by 2025, capital goods analysts at Barclays estimate. That would be roughly equal to the size of the entire industrial robotics market today.
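
Going from $116 million to $11.5 billion in roughly a decade implies a ferocious compound growth rate; a quick calculation, assuming the ten-year 2015-to-2025 span the quoted dates suggest:

    # Implied compound annual growth rate (CAGR) of the cobot market.
    start_value = 116e6   # USD, last year (2015)
    end_value = 11.5e9    # USD, 2025, per the Barclays estimate
    years = 10            # assumed span

    cagr = (end_value / start_value) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.0%}")  # ~58% per year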

“By 2020 it will be a game-changer,” said Stefan Lampa, head of robotics of Germany’s Kuka, during a panel discussion organized by the International Federation of Robotics (IFR) at the Automatica trade fair in Munich.

Growth in industrial robot unit sales slowed to 12 percent last year from 29 percent in 2014, the IFR said on Wednesday, weighed by a sharp fall in top buyer China.

The world’s top industrial robot makers – Japan’s Fanuc and Yaskawa, Switzerland’s ABB and Germany’s Kuka – all have collaborative robots on the market, although sales are not yet significant for them.

But the market leader and pioneer is Denmark’s Universal Robots, a start-up that sold its first cobot in 2009 and was acquired by U.S. automatic test equipment maker Teradyne for $285 million last year.

 

 

AMD Goes 32 Cores With Zeppelin

June 27, 2016 by Michael  
Filed under Computing

A few months back Nick wrote about an AMD Zen processor found on a Linux Kernel Mailing List, confirming that Zeppelin had support for eight bundles of four cores on a single chip, or 32 physical processing cores.

This tied in with a story written in August 2015 about an MCM (Multi-Chip Module) that featured a Zeppelin core, a super-fast 100GB/s interconnect via four GMI links, and a Greenland (Vega) high-performance GPU with 4+ TFlops of performance. That APU will still happen, just a bit later, at the end of 2017.

Now we have a few more details about the Zeppelin cluster, and this is proving to be another “Fudzilla told you so” moment. Apparently you can put up to four Zeppelin CPU clusters on one chip and make a 32-core part. These will be connected via a coherent interconnect (coherent data fabric).

Each Zeppelin module has eight Zen cores, and each Zen core has 512KB of L2 cache. Four Zen cores share 8MB of L3 cache, making the total amount of L3 cache per Zeppelin cluster 16MB.

Each Zeppelin cluster will have PCIe Gen 3, SATA 3, and a 10GbE network connection. The server version of the chip adds a server controller hub, a DDR4 memory controller and AMD secure processors.

AMD will have at least three pin-compatible versions of the next-generation Opteron using Zeppelin clusters of Zen cores: an 8-core version with a single Zeppelin cluster, a dual-cluster version, and a quad-cluster version, the one we have called Naples, which will have 64MB of L3 cache. All this sounds like rather a lot.
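
The cache arithmetic tallies up neatly with the 64MB quoted for Naples; a quick sketch based only on the figures above:

    # Cache totals implied by the reported Zeppelin configuration.
    cores_per_cluster = 8
    l2_per_core_kb = 512
    l3_per_quad_mb = 8  # each group of four Zen cores shares 8MB of L3

    l2_per_cluster_mb = cores_per_cluster * l2_per_core_kb // 1024  # 4MB
    l3_per_cluster_mb = (cores_per_cluster // 4) * l3_per_quad_mb   # 16MB

    for clusters in (1, 2, 4):  # the three pin-compatible Opteron configs
        print(f"{clusters} cluster(s): {clusters * cores_per_cluster} cores, "
              f"{clusters * l2_per_cluster_mb}MB L2, {clusters * l3_per_cluster_mb}MB L3")
    # 4 clusters -> 32 cores, 64MB L3 (the part we have called Naples)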

We are expecting to see Zen-based Opterons in eight-, sixteen- and thirty-two-core versions for servers in 2017.

Courtesy-Fud