
BlackBerry Says Device Business Is Top Priority

June 24, 2016 by mphillips  
Filed under Mobile

BlackBerry Ltd’s top priority this year is to make its devices business turn a profit, its chief executive said, even as it weighs the future of its hardware operation.

“The device business must be profitable, because we don’t want to run a business that drags onto the bottom line,” Chief Executive John Chen told investors at the company’s annual meeting. “We’ve got to get there this year.”

Chen has previously said a decision would be made by September on the future of the unit, which has suffered a sustained drop in sales in recent quarters.

But at the meeting, attended by around 100 people, he said he sees better opportunity in providing services that enable increasingly commoditized hardware to do more.

“I don’t personally believe handsets will be the future of any company,” he said.

BlackBerry, once the smartphone market leader before being displaced by Apple Inc and competitors running on Alphabet Inc’s Android platform, has worked to reposition itself as a software and service provider focused on device management for large organizations.

In its presentation to investors, the company said it expects the broader market for the types of software it is producing to expand to $17.6 billion by 2019, up from $525 million in 2012 and just under $4 billion in 2015, powered by growth in the medical, legal, financial and automotive industries.

But some of those in attendance were skeptical about BlackBerry’s ability to deliver on its strategic pivot.

“The first word that comes to mind is lackluster,” said one shareholder at the meeting who declined to give his name. “Time is running out.”

Chen reiterated that BlackBerry wants to grow its software revenue by 30 percent in this fiscal year, which he estimated would be double overall market growth, and to notch positive free cash flow.

BlackBerry is due to report first quarter results on Thursday.

Chen took up the CEO role in 2013 with a reputation as a turnaround artist. But the company’s stock has only risen modestly since then, with many investors waiting for signs the now-smaller company will be able to carve out new opportunities.

“I appreciate the strategy,” said Ken Tota, an investor in BlackBerry’s biggest shareholder, Fairfax Financial Holdings Ltd. He said he was optimistic a renewed focus on security could help reinvigorate BlackBerry over the next five years.

“It’s a niche, but it’s a worldwide niche,” he said.

 

 

New ‘Godless’ Malware Infecting Android Phones

June 23, 2016 by mphillips  
Filed under Mobile

Android phone owners take note: a new type of malware has been found in legitimate-looking apps that can “root” your phone and secretly install unwanted programs.

The malware, dubbed Godless, has been found lurking on app stores including Google Play, and it targets devices running Android 5.1 (Lollipop) and earlier, which account for more than 90 percent of Android devices, Trend Micro said Tuesday in a blog post.

Godless hides inside an app and uses exploits to try to root the OS on your phone. Rooting essentially grants administrator-level access to a device, allowing unauthorized apps to be installed.

Godless contains various exploits to ensure it can root a device, and it can even install spyware, Trend Micro said.

A newer variant can also bypass security checks at app stores like Google Play. Once the malware has finished its rooting, it can be tricky to uninstall, the security firm said.

Trend Micro said it found various apps in Google Play that contain the malicious code.

“The malicious apps we’ve seen that have this new remote routine range from utility apps like flashlights and Wi-Fi apps, to copies of popular games,” the company said.

Some apps are clean but have a corresponding malicious version that shares the same developer certificate. The danger there is that users install the clean app and are then upgraded to the malicious version without their knowledge.
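
To illustrate the certificate-sharing risk in rough terms, here is a minimal sketch (in Python, with made-up app names and fingerprints; it is not Trend Micro’s tooling or an Android API) of flagging apps whose signing certificate matches one already tied to a known-malicious variant:

```python
# Hypothetical sketch: flag installed apps whose signing certificate
# fingerprint matches one already tied to a known-malicious variant.
# App names and fingerprints below are invented for illustration.

KNOWN_MALICIOUS_CERTS = {
    "ab:cd:ef:12",   # cert seen signing a malicious "flashlight" build
}

installed_apps = {
    "com.example.flashlight": "ab:cd:ef:12",  # clean-looking app, same cert
    "com.example.notes": "99:88:77:66",
}

def flag_risky_apps(apps, bad_certs):
    """Return apps signed with a certificate linked to known malware."""
    return [name for name, cert in apps.items() if cert in bad_certs]

if __name__ == "__main__":
    for app in flag_risky_apps(installed_apps, KNOWN_MALICIOUS_CERTS):
        print(f"Warning: {app} shares a developer certificate with known malware")
```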

So far, Trend Micro says it has seen 850,000 affected devices, with almost half in India and more in other Southeast Asian countries. Less than 2 percent were in the U.S.

 

 

IBM Funds Researchers Who Create KiloCore Processor

June 22, 2016 by Michael  
Filed under Computing

Researchers at the University of California, Davis, Department of Electrical and Computer Engineering have developed a 1,000-core processor that could eventually be put onto the commercial market.

The team developed the energy-efficient, 621 million transistor “KiloCore” chip, which can manage 1.78 trillion instructions per second, and since the project has IBM’s backing it could end up in the shops soon.

Team leader Bevan Baas, professor of electrical and computer engineering, said that it could be the world’s first 1,000-processor chip and that it is the highest clock-rate processor ever designed in a university.

While other multiple-processor chips have been created, none exceed about 300 processors. Most of those were created for research purposes and few are sold commercially. IBM, using its 32 nm CMOS technology, fabricated the KiloCore chip and could make a production run if required.

Because each processor is independently clocked, it can shut itself down to further save energy when not needed, said graduate student Brent Bohnenstiehl, who developed the principal architecture. Cores operate at an average maximum clock frequency of 1.78 GHz, and they transfer data directly to each other rather than using a pooled memory area that can become a bottleneck for data.

The 1,000 processors can execute 115 billion instructions per second while dissipating only 0.7 watts, which means the chip can be powered by a single AA battery. The KiloCore chip executes instructions more than 100 times more efficiently than a modern laptop processor.
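
To put those figures in perspective, here is a back-of-the-envelope calculation using only the numbers quoted above; the AA battery capacity of roughly 2.5 watt-hours is an assumption for illustration, not a figure from the researchers:

```python
# Back-of-the-envelope check of the KiloCore figures quoted above.
instructions_per_second = 115e9   # 115 billion instructions per second
power_watts = 0.7                 # reported dissipation

# Energy efficiency: instructions executed per joule of energy.
instructions_per_joule = instructions_per_second / power_watts
print(f"{instructions_per_joule:.2e} instructions per joule")  # ~1.6e11

# Assumed AA battery capacity (~2.5 Wh is a typical alkaline figure).
battery_wh = 2.5
hours_of_operation = battery_wh / power_watts
print(f"~{hours_of_operation:.1f} hours on one AA battery")    # ~3.6 hours
```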

The processor has already been adapted for wireless coding/decoding, video processing, encryption, and other tasks involving large amounts of parallel data, such as scientific data applications and datacentre work.

Courtesy-Fud

 

Toyota To Build Artificial Intelligence-based Driving Systems

June 21, 2016 by mphillips  
Filed under Around The Net

Toyota Motor Corp is focusing over the next five years on developing driver assistance systems that integrate artificial intelligence (AI) to improve vehicle safety, the head of its advanced research division said.

Gill Pratt, CEO of the recently established Toyota Research Institute (TRI), the Japanese automaker’s research and development company that focuses on AI, said it aims to improve car safety by enabling vehicles to anticipate and avoid potential accident situations.

Toyota has said the institute will spend $1 billion over the next five years, as competition to develop self-driving cars intensifies.

Earlier this month, domestic rival Honda Motor Co said it was setting up a new research body focused on artificial intelligence, joining other global automakers that are investing in robotics research, including Ford and Volkswagen AG.

“Some of the things that are in car safety, which is a near-term priority, I’m very confident that we will have some advances come out during the next five years,” Pratt told reporters late last week in comments embargoed for Monday.

The concept of allowing vehicles to think, act and take some control from drivers to perform evasive maneuvers forms a key platform of Toyota’s efforts to produce a car which can drive automatically on highways by the 2020 Tokyo Olympics.

While current driver assistance systems largely use image sensors to avoid obstacles, including vehicles and pedestrians within the car’s lane, Pratt said TRI was looking at AI solutions to enable “the car to be evasive beyond the one lane”.

“The intelligence of the car would figure out a plan for evasive action … Essentially (it would) be like a guardian angel, pushing on the accelerators, pushing on the steering wheel, pushing on the brake in parallel with you.”

As Japanese automakers race against technology companies to develop automated vehicles, they are also grappling with a rapidly graying society, which puts future demand for private vehicle ownership at risk.

Pratt said he saw the possibility that Toyota may one day become a maker of robots to help the elderly.

Asked about the potential for Toyota to produce robots for use in the home, he said: “That’s part of what we’re exploring at TRI.”

Pratt declined to comment on a media report earlier this month that Toyota is in talks with Google’s parent company Alphabet to acquire Boston Dynamics and Schaft, both of which are robotics divisions of the technology company.

 

Was Last Week’s E3 A Success?

June 20, 2016 by Michael  
Filed under Gaming

E3 2016 has officially come to a close, and despite the fact that Activision and EA were absent from the show floor, my experience of the show was that it was actually quite vibrant and filled with plenty of intricate booth displays and compelling new games to play. The same cannot be said for the ESA’s first ever public satellite event, E3 Live, which took place next door at the LA Live complex. The ESA managed to give away 20,000 tickets in the first 24 hours after announcing the show in late May. But as the saying goes, you get what you pay for…

The fact that it was a free event, however, does not excuse just how poor this show really was. Fans were promised by ESA head Mike Gallagher in the show’s initial announcement “the chance to test-drive exciting new games, interact with some of their favorite developers, and be among the first in the world to enjoy groundbreaking game experiences.”

I spent maybe an hour there, and when I first arrived, I genuinely questioned whether I was in the right place. But to my disbelief, the small area (maybe the size of two tennis courts) was just filled with a few tents, barely any games, and a bunch of merchandise (t-shirts and the like) being marketed to attendees. The fans I spoke with felt like they had been duped. At least they didn’t pay for their tickets…

“When we found out it was the first public event, we thought, ‘Cool, we can finally go to something E3 related,’ because we don’t work for any of the companies and we’re not exhibitors, and I was excited for that. But then we got here and we were like, ‘Uh oh, is this it?’ So we got worried and we’re a little bit upset,” said Malcolm, one of the fans at the event. He added that he thought it was going to be in one of the buildings right in the middle of the LA Live complex, rather than a siphoned-off section outside with tents.

As I walked around, it was the same story from attendees. Jose, who came with his son, felt similarly to Malcolm. “It’s not that big. I expected a lot of demos, but they only had the Lego Dimensions demo. I expected something bigger where we could play some of the big, upcoming titles. All it is is some demo area with Lego and some VR stuff,” he told me.

When I asked him if he got what he thought would be an E3 experience, he continued, “Not even close, this is really disappointing. It’s really small and it’s just here. I expected more, at least to play some more. And the VR, I’m not even interested in VR. Me and my son have an Xbox One and we wanted to play Battlefield 1 or Titanfall 2 and we didn’t get that opportunity. I was like c’mon man, I didn’t come here to buy stuff. I came here to enjoy games.”

By cobbling together such a poor experience for gamers, while 50,000 people enjoy the real E3 next door, organizers risk turning off the very audience that they should be welcoming into the show with open arms. As the major publishers told me this week, E3 is in a transitional period and needs to put players first. That’s why EA ultimately hosted its own event, EA Play. “We’re hoping the industry will shift towards players. This is where everything begins and ends for all of us,” said EA global publishing chief Laura Miele.

It seems like a no-brainer to start inviting the public, and that’s what we all thought was happening with E3 Live, but in reality fans were invited to an atmosphere and an “experience” – one that barely contained games. The good news, as the quickly sold-out E3 Live tickets indicated, is that there is big demand for a public event. And it shouldn’t be very complicated to pull off. If the ESA sells tickets, rather than giving them away, it can generate a rather healthy revenue stream. Give fans an opportunity to check out the games for a couple of days and let the industry conduct its business on a separate two or three days. That way, the ESA will be serving both constituents and E3 will get a healthy boost.

And beyond that, industry professionals won’t have to worry anymore about getting shoved or trampled, which nearly happened to me when a legion of frenzied gamers literally all started running into West Hall as the show floor opened at 10AM. Many of these people are clearly not industry-qualified, and yet E3 allows them to register. It’s time to make E3 more public and more professional. It’s your move, ESA.

We asked the ESA to provide comment on the reception to E3 Live but have not received a response. We’ll update this story if we get a statement.

Courtesy-GI.biz

 

Will AMD’s Naples Processor Have 32 Cores?

June 16, 2016 by Michael  
Filed under Computing

AMD’s Zen chip will have as many as 32 cores, 64 threads and more L3 cache than you can poke a stick at.

Codenamed Naples, the chip uses the Zen architecture. Each Zen core has its own dedicated 512KB cache, and a cluster [shurely that should be a cloister – Ed.] of Zen cores shares an 8MB L3 cache, which makes the total shared L3 cache 64MB. This is a big chip, and of course there will be a 16-core variant.
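
Working backwards from the cache figures quoted above – a derivation from this article’s numbers rather than an AMD disclosure – the implied layout is eight clusters of four cores each:

```python
# Derive the implied cluster layout from the cache figures quoted above.
total_cores = 32
l2_per_core_kb = 512          # dedicated cache per Zen core
l3_per_cluster_mb = 8         # shared L3 per cluster of cores
total_l3_mb = 64              # total shared L3 on the chip

clusters = total_l3_mb // l3_per_cluster_mb        # 8 clusters
cores_per_cluster = total_cores // clusters        # 4 cores per cluster
total_l2_mb = total_cores * l2_per_core_kb / 1024  # 16MB of dedicated cache

print(f"{clusters} clusters of {cores_per_cluster} cores, "
      f"{total_l2_mb:.0f}MB dedicated cache + {total_l3_mb}MB shared L3")
```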

This will be a 14nm FinFET product manufactured at GlobalFoundries and supporting the x86 instruction set. Naples has eight independent memory channels and up to 128 lanes of gen 3 PCIe, which makes it suitable for fast NVMe memory controllers and drives. Naples also supports up to 32 SATA or NVMe drives.

If you want a fast network interface, Naples supports 16x 10GbE, and the controller is integrated, probably in the chipset. Naples uses the SP3 LGA server socket.

The first Zen-based server / enterprise products will range from a modest 35W TDP to a maximum of 180W TDP for the fastest parts.

There will be dual-, quad-, sixteen- and thirty-two-core server versions of Zen, arriving at different times. Most of them will launch in 2017, with the possibility of a very late 2016 introduction.

It is another one of those Fudzilla-told-you-so moments: we already revealed a few Zen-based products last year. A Zen chip with a Greenland / Vega HBM2-powered GPU and HSA support will come too, but much later.

Lisa Su, AMD’s CEO, told Fudzilla that the desktop version will come first, followed by server, notebook and finally embedded parts. If that claimed 40 percent IPC improvement holds across more than just a single task, AMD has a chance of giving Intel a run for its money.

 

Courtesy-Fud

 

VMware Launches TrustPoint, Aims To Enhance Network Security

June 15, 2016 by mphillips  
Filed under Around The Net

VMware is aiming to help businesses get a better handle on the security of the computers their employees use. The new TrustPoint product the company announced Monday uses software to make it possible to track and manage computers easily and quickly, without consuming large amounts of data.

The software allows companies to detect what devices are on their networks, along with which ones are being managed by IT. That helps businesses understand if they have machines operating outside the reach of their security systems, which could be a problem for protecting company data.

In addition, businesses will also be able to use TrustPoint to handle operating system imaging with VMware’s technology, so it’s easier for them to patch systems that are managed with TrustPoint.

It’s all part of VMware’s ongoing push into the enterprise endpoint management market, which has proved increasingly popular as employees bring their own devices to work and security threats have intensified.

TrustPoint is powered by technology from Tanium. In addition to detecting unmanaged devices, TrustPoint can block those devices from connecting to company networks, so they don’t get access to key data.

Computers running TrustPoint communicate with other devices near them that are running the software, so that it’s possible for a group of computers to all get a software update by having pieces of it pushed to several different devices using TrustPoint. Once the pieces of the update have been downloaded, TrustPoint can coordinate the transfer of information between computers so that each one gets a complete update.
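
The peer-assisted distribution described above can be sketched in rough, conceptual terms as follows; this illustrates the general chunk-sharing idea only and is not VMware’s or Tanium’s actual protocol:

```python
# Conceptual sketch of peer-assisted update distribution: each machine
# starts with a different subset of the update's chunks and fills in the
# gaps by copying missing chunks from nearby peers. Illustrative only;
# not VMware's or Tanium's actual protocol.

def distribute_update(peers, total_chunks):
    """peers: dict mapping peer name -> set of chunk indices it already has."""
    complete = set(range(total_chunks))
    rounds = 0
    while any(chunks != complete for chunks in peers.values()):
        rounds += 1
        for name, chunks in peers.items():
            missing = complete - chunks
            for other, other_chunks in peers.items():
                if other == name:
                    continue
                # Copy any chunks this peer is missing from its neighbour.
                chunks |= (missing & other_chunks)
    return rounds

if __name__ == "__main__":
    peers = {
        "laptop-a": {0, 1},
        "laptop-b": {2, 3},
        "laptop-c": {4, 5},
    }
    rounds = distribute_update(peers, total_chunks=6)
    print(f"All peers complete after {rounds} round(s): {peers}")
```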

VMware has said that TrustPoint is particularly well-suited to rolling out Windows 10, which more and more companies are gearing up for. As part of that transition, the system can upgrade a device with a consumer Windows 10 license to Windows 10 Enterprise, so people can use other administration tools to manage it, too.

Hewlett Packard Opens Up “The Machine” To Developers

June 15, 2016 by Michael  
Filed under Computing

Hewlett Packard Enterprise (HPE) has opened up The Machine, its next-generation project to reinvent the computer, to developers – even though the system itself does not yet exist.

The Machine initiative, which the firm claims will enable firms to analyse a trillion CRM records in a split second and run an entire data center out of one box, was first announced at HPE’s Discover event two years ago. It sets out to reinvent the architecture of next-generation computers based on the concept of memory-driven computing, using processors closely coupled with non-volatile memory (NVM).

The project is still at a relatively early stage, but it is clear that The Machine will be sufficiently different from current systems as to require an entirely new software ecosystem to function, which is why HPE is opening its doors to developers at this year’s Discover event in Las Vegas.

“Given the fundamental shift in how The Machine will work, the initiative aims to start familiarising developers with its new programming model as well as invite them to help develop the software itself,” the firm said in a statement.

“This is an uncommonly early opportunity for developers to help build components of The Machine from the ground up, since much of the software is in the starting phases.”

The tools initially available include four key code modules that have been created to enable developer communities to evaluate how The Machine is likely to have an effect in applications such as machine learning and graph analytics.

The modules are: a novel database engine that speeds up applications by taking advantage of a large number of CPU cores and NVM; a fault-tolerant programming model for NVM that adapts existing multi-threaded code to take advantage of persistent memory; a Fabric Attached Memory emulator designed to allow users to explore the new architecture; and a DRAM-based performance emulator that uses existing hardware to emulate the latency and bandwidth characteristics of future NVM technology.
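
To give a flavour of the persistent-memory programming model those modules target, here is a minimal, purely illustrative sketch that uses a memory-mapped file as a stand-in for NVM; it is not based on HPE’s released code, and the file name is hypothetical:

```python
# Minimal illustration of the persistent-memory idea: a memory-mapped file
# stands in for NVM, so data written through the mapping survives a process
# restart. Purely illustrative; not based on HPE's released code modules.
import mmap
import os

PATH = "fake_nvm.bin"   # hypothetical file standing in for persistent memory
SIZE = 4096

# Create the backing "NVM" region on first use.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(b"\x00" * SIZE)

with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), SIZE) as region:
        counter = int.from_bytes(region[0:8], "little")
        print(f"counter read back from 'persistent memory': {counter}")
        region[0:8] = (counter + 1).to_bytes(8, "little")
        region.flush()   # analogous to making the write durable
```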

HPE said that it will continue to update this code and release additional contributions. Some of these will address changes to operating systems, including Linux, that will be required to enable them to run on The Machine.

HPE also intends to produce sample applications that demonstrate how The Machine can significantly improve application scale and performance.

Courtesy-TheInq

 

Mozilla Establishes Fund To Audit Open-source Code

June 14, 2016 by mphillips  
Filed under Computing

A new Mozilla fund, named Secure Open Source, will provide security audits of open-source code, following the discovery of critical security bugs like Heartbleed and Shellshock in key pieces of open-source software.

Mozilla has set up a $500,000 initial fund that will be used for paying professional security firms to audit project code. The foundation will also work with the people maintaining the project to support and implement fixes and manage disclosures, while also paying for the verification of the remediation to ensure that identified bugs have been fixed.

The initial fund will cover audits of some widely used open source libraries and programs.

The move is a recognition of the growing use of open-source software for critical applications and services by businesses, government and educational institutions. “From Google and Microsoft to the United Nations, open source code is now tightly woven into the fabric of the software that powers the world. Indeed, much of the Internet – including the network infrastructure that supports it – runs using open source technologies,” wrote Chris Riley, Mozilla’s head of public policy.

Mozilla is hoping that the companies and governments that use open source will join it and provide additional funding for the project.

In a trial of the SOS program on three pieces of open-source software, Mozilla said it found and fixed 43 bugs, including a critical vulnerability and two issues in connection with a widely-used image file format. “These initial results confirm our investment hypothesis, and we’re excited to learn more as we open for applications,” Riley wrote.

The SOS fund “fills a critical gap in cybersecurity by creating incentives to find the bugs in open source and letting people fix them,” said James A. Lewis, senior vice president and director of the Strategic Technologies Program at the Center for Strategic and International Studies, in a statement.

The SOS is part of a larger program, called Mozilla Open Source Support, launched by Mozilla in October last year to support open source and free software development. MOSS has an annual budget of about $3 million.

To qualify for SOS funding, the software must be open source or free software, with the appropriate licenses and approvals, and must be actively maintained. Other factors that will be considered include whether a project is already corporate-backed, how commonly the software is used, whether it is network-facing or regularly processes untrusted data, and its importance to the continued functioning of the Internet or the Web.

Symantec Corp To Acquire Cyber Security Firm Blue Coat

June 14, 2016 by mphillips  
Filed under Around The Net

Technology security firm Symantec Corp announced that it will acquire privately held cyber security company Blue Coat for $4.65 billion in a cash deal that will ramp up Symantec’s enterprise security business.

Blue Coat helps protect companies’ web gateways from cyber attacks, a service that will complement Symantec’s existing offerings for large corporations, such as email and endpoint security, Symantec executives said in an interview on Sunday.

“Blue Coat brings capabilities from the web and for network-borne threats, which combined with what we already offer will provide better protection for our customers,” said Ajei Gopal, Symantec’s interim president and chief operating officer.

Symantec, which makes the Norton antivirus software, has been undergoing a transformation over the past year. It sold its data storage unit, Veritas, for $7.4 billion to a group led by Carlyle Group LP in January to gain the cash necessary to turn around its core security software business.

Chief Financial Officer Thomas Seifert said Symantec had been eyeing Blue Coat for a while but wanted to have the separation of the Veritas unit behind the company before making a move to buy it. He said the deal, which is expected to close in the third quarter, would be immediately accretive.

With the purchase of Blue Coat, 62 percent of Symantec’s revenue will come from enterprise security, and the company will be better positioned to compete with security players such as Palo Alto Networks Inc, FireEye Inc and Check Point Software Technologies Ltd. Symantec will now have $4.4 billion in combined revenue.

While it is shifting its focus more toward enterprise security, Symantec has no immediate plans to sell its consumer unit, Seifert said, adding that it is a highly profitable part of the company.

By buying Blue Coat, Symantec also solves a leadership issue, with Blue Coat CEO Greg Clark becoming Symantec’s CEO. Symantec’s previous CEO, Michael Brown, left in April after the company reported disappointing quarterly results.

Blue Coat had been preparing an initial public offering for later this summer. Its sale marks a quick turnaround for its private equity owner, Bain Capital LLC, which acquired Blue Coat Systems Inc from fellow private-equity firm Thoma Bravo LLC for $2.4 billion last year.

 

 

Is Something Bigger On The Horizon After Virtual Reality?

June 14, 2016 by Michael  
Filed under Gaming

This week’s E3 won’t be entirely dominated by VR, as some events over the past year have been; there’s too much interest in the prospect of new console hardware from all the major players and in the AAA line-up as this generation hits its stride for VR to grab all the headlines. Nonetheless, with both Rift and Vive on the market and PSVR building up to an autumn launch, VR is still likely to be the focus of a huge amount of attention and excitement at and around E3.

Part of that is because everyone is still waiting to see exactly what VR is going to be. We know the broad parameters of what the hardware is and what it can do – the earliest of early adopters even have their hands on it already – but the kind of experiences it will enable, the audiences it will reach and the way it will change the market are still totally unknown. The heightened interest in VR isn’t just because it’s exciting in its own right; it’s because it’s unknown, and because we all want to see the flashes of inspiration that will come to define the space.

One undercurrent to look out for at E3 is one that the most devoted fans of VR will be deeply unhappy with, but one which has been growing in strength and confidence in recent months. There’s a strong view among quite a few people in the industry (both in games and in the broader tech sector) that VR isn’t going to be an important sector in its own right. Rather, its importance will be as a stepping stone to the real holy grail – Augmented or Mixed Reality (AR / MR), a technology that’s a couple of years further down the line but which will, in this vision of the future, finally reach the mainstream consumer audience that VR will never attain.

The two technologies are related but, in practical usage, very different. VR removes the user from the physical world and immerses them entirely in a virtual world, taking over their visual senses entirely with closed, opaque goggles. AR, on the other hand, projects additional visual information onto transparent goggles or glasses; the user still sees the real world around them, but an AR headset adds an extra, virtual layer, ranging from something as simple as a heads-up display (Google’s ill-fated Glass was a somewhat clunky attempt at this) to something as complex as 3D objects that fit seamlessly into your reality, interacting realistically with the real objects in your field of vision. Secretive AR headset firm Magic Leap, which has raised $1.4 billion in funding but remains tight-lipped about its plans, prefers to divide the AR space into Augmented Reality (adding informational labels or heads-up display information to your vision) and Mixed Reality (which adds 3D objects that sit seamlessly alongside real objects in your environment).

The argument I’m hearing increasingly often is that while VR is exciting and interesting, it’s much too limited to ever be a mainstream consumer product – but the technology it has enabled and advanced is going to feed into the much bigger and more important AR revolution, which will change how we all interact with the world. It’s not what those who have committed huge resources to VR necessarily want to hear, but it’s a compelling argument, and one that’s worthy of consideration as we approach another week of VR hype.

The reasoning has two bases. The first is that VR isn’t going to become a mainstream consumer product any time soon, a conclusion based on a number of well-worn arguments that will be familiar to anyone who’s followed the VR resurgence and which have yet to receive a convincing rebuttal – other than an optimistic “wait and see”. The first of these is that VR simply doesn’t work well enough for a large enough proportion of the population to become a mainstream technology. Even with great frame-rates and lag-free movement tracking, some aspects of VR simply induce nausea and dizziness in a decent proportion of people. One theory is that it’s down to the fact that VR only emulates stereoscopic depth perception, i.e. the difference in the image perceived by each eye, and can’t emulate focal depth perception, i.e. the physical focusing of your eye on objects at different distances from you; for some people the disparity between those two focusing mechanisms isn’t a problem, while for others it makes them feel extremely sick.

Another theory is that it’s down to a proportion of the population getting nauseous from physical acceleration and movement not matching up with visual input, rather like getting motion sick in a car or bus. In fact, both of those things probably play a role; either way, the result is that a sizeable minority of people feel ill almost instantly when using VR headsets, and a rather more sizeable number feel dizzy and unwell after playing for extended periods of time. We won’t know just how sizeable the latter minority is until more people actually get a chance to play VR for extended periods; it’s worth bearing in mind once again that the actual VR experiences most people have had to date have been extremely short demos, on the order of 3 to 5 minutes long.

The second issue is simply a social one. VR is intrinsically designed around blocking out the world around you, and that limits the contexts in which it can be used. Being absorbed in a videogame while still aware of the world and the people around you is one thing; actually blocking out that world and those people is a fairly big step. In some contexts it simply won’t work at all; for others, we’re just going to have to wait and see how many consumers are actually willing to take that step on a regular basis, and your take on whether it’ll become a widespread, mainstream behaviour or not really is down to your optimism about the technology.

With AR, though, both of these problems are solved to some extent. You’re still viewing the real world, just with extra information in it, which ought to make the system far more usable even for those who experience motion sickness or nausea from VR (though I do wonder what happens regarding focal distance when some objects appear to be at a certain position in your visual field, yet exist at an entirely different focal distance from your eyes; perhaps that’s part of what Magic Leap’s secretive technology solves). Moreover, you’re not removed from the world any more than you would be when using a smartphone – you can still see and interact with the people and objects around you, while also interacting with virtual information. It may look a little bit odd in some situations, since you’ll be interacting with and looking at objects that don’t exist for other people, but that’s a far easier awkwardness to overcome than actually blocking off the entire physical world.

What’s perhaps more important than this, though, is what AR enables. VR lets us move into virtual worlds, sure; but AR will allow us to overlay vast amounts of data and virtual objects onto the real world, the world that actually matters and in which we actually live. One can think of AR as finally allowing the huge amounts of data we work with each day to break free of the confines of the screens in which they are presently trapped; both adding virtual objects to our environments, and tagging physical objects with virtual data, is a logical and perhaps inevitable evolution of the way we now work with data and communications.

While the first AR headsets will undoubtedly be a bit clunky (the narrow field of view of Microsoft’s Hololens effort being a rather off-putting example), the evolutionary path towards smaller, sleeker and more functional headsets is clear – and once they pass a tipping point of functionality, the question of “VR or AR” will be moot. VR is, at best, a technology that you dip into for entertainment for an hour here and there; AR, at its full potential, is something as transformative as PCs or smartphones, fundamentally changing how pretty much everyone interacts with technology and information on a constant, hourly, daily basis.

Of course, it’s not a zero-sum game – far from it. The success of AR will probably be very good for VR in the long term; but if we see VR now as a stepping stone to the greater goal of AR, then we can imagine a future for VR itself only as a niche within AR. AR stands to replace and reimagine much of the technology we use today; VR will be one thing that AR hardware is capable of, perhaps, but one that appeals only to a select audience within the broad, almost universal adoption of AR-like technologies.

This is the vision of the future that’s being articulated more and more often by those who work most closely with these technologies – and while it won’t (and shouldn’t) dampen enthusiasm for VR in the short term, it’s worth bearing in mind that VR isn’t the end-point of technological evolution. It may, in fact, just be the starting point for something much bigger and more revolutionary – something that will impact the games and tech industries in a way even more profound than the introduction of smartphones.

Courtesy-GI.biz

 

Is AMD Outpacing Nvidia In The Gaming Space?

June 14, 2016 by Michael  
Filed under Gaming

MKM analyst Ian Ing claims that AMD’s recent gaming refresh was better done than Nvidia’s.

Writing in a research report, Ing said that both GPU suppliers continue to benefit from strong core gaming plus emerging applications for new GPU processing.

However, AMD’s transition to the RX series from the R9 this month is proving smoother than Nvidia’s switch to Pascal architecture from Maxwell.

Nvidia is doing well from new GPU applications such as virtual reality and autonomous driving.

He said that pricing was holding despite a steady availability of SKUs from board manufacturers. Ing wrote that he expected a steeper ramp of RX availability compared to last year’s R9 launch, as the new architecture is lower-risk, given that HBM memory was implemented last year.

Ing upped his price target on Advanced Micro Devices stock to 5 from 4, and on Nvidia stock to 52 from 43. On the stock market today, AMD stock rose 0.9 per cent to 4.51. Nvidia climbed 0.2 per cent to 46.33.

Nvidia unveiled its new GeForce GTX 1080, using the Pascal architecture, on 27 May and while Maxwell inventory was running out, Nvidia customers were experiencing Pascal shortages.

“We would grow concerned if the present availability pattern persists in the coming weeks, which would imply supply issues/shortages,” Ing said.

Courtesy-Fud

 

Twitter Locks Accounts After Millions Of Passwords Reportedly Exposed

June 13, 2016 by mphillips  
Filed under Around The Net

Twitter said it had locked down and forced a password reset of some accounts after an unconfirmed claim of a leak of nearly 33 million usernames and passwords to the social network.

The company said the information was not obtained from a hack of its servers, and speculated that it may have been gathered from other recent breaches, from malware on victims’ machines that steals passwords for all sites, or from a combination of both.

“In each of the recent password disclosures, we cross-checked the data with our records. As a result, a number of Twitter accounts were identified for extra protection. Accounts with direct password exposure were locked and require a password reset by the account owner,” Twitter’s Trust & Information Security Officer, Michael Coates said in a blog post.

Millions of users have been notified by Twitter that their accounts are at risk of being taken over, reported the Wall Street Journal on Thursday.  The company did not specify to the newspaper how many users were notified and forced to change their passwords but said that the total is in the millions.

Hacked-information database LeakedSource revealed on Wednesday that it had acquired a database of 32.8 million records containing Twitter usernames, emails and passwords from a user who goes by the alias Tessa88@exploit.im, but there were questions from some experts as to the authenticity of the data.

The same user provided LeakedSource with names and passwords of alleged users of MySpace.com and VK.com.

“We have very strong evidence that Twitter was not hacked, rather the consumer was,” LeakedSource said. It pointed out that the passwords were in plain text with no encryption or hashing, whereas Twitter said it uses a password hashing function called bcrypt. The credentials are “real and valid”: of 15 users LeakedSource asked, all 15 verified their passwords, the site said.
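
For reference, bcrypt hashing of the kind Twitter describes stores only a salted, deliberately slow hash rather than the password itself, which is why a plain-text dump points away from a server-side breach. A minimal sketch, assuming the third-party Python bcrypt package:

```python
# Minimal sketch of bcrypt password hashing, assuming the third-party
# "bcrypt" package (pip install bcrypt). A site storing hashes like this
# never keeps the plain-text password, so a leak of plain-text credentials
# points to theft from users or other services rather than the site itself.
import bcrypt

password = b"correct horse battery staple"

# Hash with a per-password random salt; the salt is embedded in the output.
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
print(hashed)  # e.g. b'$2b$12$...'

# Verification re-hashes the candidate with the stored salt and compares.
print(bcrypt.checkpw(password, hashed))          # True
print(bcrypt.checkpw(b"wrong guess", hashed))    # False
```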

 

Is E3 Still Relevant?

June 9, 2016 by Michael  
Filed under Gaming

What is the point of E3? I ask not in a snarky tone, but one of genuine curiosity, tinged with concern. I’m simply not sure what exactly the show’s organizers, the ESA, think E3 is for any more. Over the years, what was once by far the largest date in the industry’s annual calendar has struck out in various new directions as it sought to remain relevant, but it’s always ended up falling back to the path of least resistance – the familiar halls of the Los Angeles Convention Center, the habitual routine of allowing only those who can prove some industry affiliation to attend. For all that the show’s organizers regularly tout minor tweaks to the formula as earth-shattering innovation, E3 today is pretty much exactly the same beast as it was when I first attended 15 years ago – and by that point, the show’s format was already well-established.

There’s one major difference, though; E3 today is smaller. It now struggles to fill the convention center’s halls, and a while back ditched the Kentia Hall – which for years promised the discovery of unknown gems to anyone willing to sift through its morass of terrible ideas. Kentia refugees now fill gaps in the cavernous South Hall’s floor plan, elevated to sit alongside a roster of the industry’s greats that gets more meagre with each passing year. This year, attendees at E3 will find it hard not to notice a number of key absences. The loss of Konami’s once huge booth was inevitable given the company’s U-turn away from console publishing, but the decisions of EA and Activision to pull out of the show this year will be felt far more keenly.

Hence the question; what’s the point? Who, or what, is E3 actually meant to be for? It’s not for consumers, of course – they’re not allowed in, in theory, though the ESA has come up with various pointlessly convoluted ways of letting a handful of them in anyway. It’s for business, yet big players in the industry seem deeply dissatisfied with it. It’s not just EA and Activision, either; even the companies who are actually exhibiting on the show floor seem to have taken to viewing it as an addendum to the actually important part of the week, namely their live-broadcast press conferences. Once the realm only of platform holders, now every major publisher has their own – and if EA and Activision’s decision to go their own way entirely, leaving the E3 show floor, has no major negative consequences for them this year, you can be damned sure others will question the show’s cost-value next year.

The problem is that the world has changed and E3 has not. Once, it was the only truly global event on the calendar; back then, London had ECTS and Tokyo had TGS, but there was no question of them truly challenging E3’s dominance. The world was a very different place back then, though. It was a time before streaming high-resolution video, a time before the Internet both made the world a much smaller place and made the hyper-local all the more relevant. Today, E3 sits in a landscape of events most of which, bluntly, justify their existence far better than the ESA’s effort does. Huge local events in major markets around the world serve their audiences better than a remote event in LA; GamesCom in Germany and TGS in Tokyo remain the biggest of those, but there are also major events in other European, Asian and Latin American countries that balance serving the business community in their regions with putting on a huge show for local consumers.

In the United States, meanwhile, E3 finds itself assailed on two sides. The PAX events have become the region’s go-to consumer shows, and a flotilla of smaller shows cater well to specific business and social niches. GDC, meanwhile, has become the de facto place to do business and for the industry to engage in conversation and debate with itself. The margin in between those two for a “showcase show that’s not actually for consumers but sort-of lets some in and is a place for the industry to do business but also please spend a fortune on a gigantic impressive stand” is an increasingly narrow piece of ground to stand on, and E3 is quite distinctly losing its balance.

A big part of the reason for that is simply that E3 has an identity crisis. It wants to be a global show in the age of the local, in an age where “global” is accomplished by pointing a camera at a stage, not by flying people from around the world to sit in the audience. It wants to be a spectacle, and a place to do business, and ends up being dissatisfying at both; it wants to excite and intrigue consumers, but it doesn’t want to let them in. The half-measures attempted over the years to square these circles have done nothing to convince anyone that E3 knows how to stay relevant; slackening ties to allow more consumers into the show simply annoys people who are there for work, and annoys the huge audience of consumers who remain excluded. The proposed consumer showcase satellite event, too, will simply annoy companies who have to divide their attention, and annoy consumers who still feel like they’re not being let in to the “real thing”. Meanwhile the show itself feels more and more like the hole in the middle of a doughnut – all these huge conferences, showcases and events are arranged around E3’s dates, but people spend less and less time at the show proper, and with EA and Activision go two of the major reasons to do so. (It’s also hard not to note, though I can’t quantify it in figures, that more industry people each year seem to stay home and watch the conferences online rather than travelling to LA.)

The answer to E3’s problems has to be an update to its objectives; it has to be for the ESA to sit down with its membership (including those who have already effectively abandoned the show) and figure out what the point of the show is, and what it’s meant to accomplish. The E3 brand has enormous cachet and appeal among consumers; it’s hard to believe that there’s no demand for a massive showcase event at the LA Convention Center that actually threw its doors open to consumers; it’s simply a question of whether ESA members think that’s something they’d like to participate in. From a business perspective, I think they’d be mad not to; the week of E3, loaded with conferences and announcements, drives the industry’s most devoted fans wild, and getting a few hundred thousand of them to pass through a show floor on that week would be one of the most powerful drivers of early sentiment and word of mouth imaginable.

As for business: it’s not like there isn’t a tried, tested and long-standing model for combining business and consumer shows that doesn’t involve a half-baked compromise. Tons of shows around the world, in games and in other fields, open for a couple of trade days before throwing the doors open to consumers over the weekend. Other approaches may also be valid, but the point is that there’s a simple and much more satisfying answer than the daft, weak-kneed reforms the ESA has attempted (“let’s let exhibitors give show passes to a certain number of super-fan consumers” – really? Really?).

E3 week remains a big deal; E3 itself may be faltering and a bit unloved, but the week around it is pretty much the only truly global showcase the industry has created for itself. That week deserves to be served by a better core event, rather than inexorably moving towards being a ton of media events orbiting a show nobody can really be bothered with. The organizers at the ESA need to be brave, bold and aggressive with what they do with E3 in future – because just falling back on the comfortable old format is starting to show diminishing returns at an alarming rate.

Courtesy-GI.biz

 

U.S. And EU Agree To ‘Umbrella’ Data Protection Pact

June 7, 2016 by mphillips  
Filed under Around The Net

The United States and the European Commission have signed a landmark agreement in the Commission’s quest to legitimize the transatlantic flow of European Union citizens’ personal information.

No, it’s not the embattled Privacy Shield, which the Commission hopes to conclude later this month, but the rather flimsier-sounding umbrella agreement or, more formally, the U.S.-EU agreement “on the protection of personal information relating to the prevention, investigation, detection, and prosecution of criminal offenses.”

It covers the exchange of personal data, including names, addresses and criminal records, between EU and U.S. law enforcers during the course of their investigations. U.S. Attorney General Loretta Lynch, European Commissioner for Justice Vĕra Jourová and Dutch Minister for Security and Justice Ard van der Steur signed the agreement in Amsterdam on Thursday.

One benefit of the agreement for EU citizens caught up in such investigations is that they will benefit from the same rights to judicial redress as U.S. citizens if a privacy breach occurs, thanks to the recently passed Judicial Redress Act.

A stumbling block on the way to the agreement was U.S. senators’ delay in approving the act after its approval by the House of Representatives.

The agreement won’t become part of international law until the European Parliament, which has been noticeably critical of the Commission’s data protection plans in recent weeks, has given its approval.