Google is believed to be spending a small fortune getting content ready for the platform, particularly video games and apps, licensing sports leagues and shooting 360-degree videos.
Daydream is being hardwired into Android 7.0, which launched this week. Google says that Samsung, HTC, ZTE, Huawei, Xiaomi, Alcatel, Asus and LG have agreed to make “Daydream ready” smartphones.
Google wants the software to be the Android of VR. It will provide a VR platform and other outfits will create the hardware and its Android chums will configure their smartphones to run the beast. But while the product is nearly good to go, so far no one has put their hand up and said they will be making headsets specifically for the platform.
The VR market is getting crowded, with entries from Facebook, Sony, Samsung Electronics and HTC. However, there are a limited number of apps and even fewer games. Sony’s PlayStation VR headset (codenamed Morpheus) is tethered to its PlayStation video-game console, but Google is focused on lower-quality mobile-based VR, whereby consumers snap their phones into a visor or headset. With the headset on, Daydream presents users with an array of apps, from YouTube to HBO Now.
While we were hoping to see Deus Ex: Mankind Divided bundled with some recently launched Polaris-based graphics cards, it appears that AMD wants to give some love to those who decide to buy its FX-series CPUs.
Available in most popular retail/e-tail stores, the bundle will include a copy of the new Deus Ex: Mankind Divided game with the purchase of a six- or eight-core AMD FX CPU. According to details provided by AMD, the promotion will run from August 23rd to November 14th, or while supplies last.
Currently, some of the hot AMD FX-series CPUs like the 6-core FX-6300 or 8-core FX-8320 are selling for as low as US $100 and US $130, so bundling a US $60 game sounds like a really good deal.
Hopefully, AMD will decide to bundle the game with some of its Polaris-based graphics cards after Deus Ex: Mankind Divided gets its DirectX 12 patch in early September.
AMD has put TrueAudio Next onto GitHub as part of its LiquidVR SDK.
AMD is trying to tackle the same audio problems as those targeted by Nvidia’s VRWorks Audio. The aim, according to the brief, is to:
“Create a scalable AMD technology that enables full real-time dynamic physics-based audio acoustics rendering, leveraging the powerful resources of AMD GPU Compute.”
In other words, it will give immersive audio alongside VR headsets and allow audio to catch up a bit with graphics.
Writing in the GPU Open blog, Carl Wakeland, a Fellow Design Engineer at AMD, said that the 2D screen had meant sound never really got a look-in. Some games had brought in 3D audio as a novelty, but this could be a distraction. The head-mounted display, however, “changes everything.”
AMD TrueAudio Next is a significant step towards making environmental sound rendering closer to real-world acoustics with the modelling of the physics that propagate sound – AKA auralisation.
The new AMD TrueAudio Next library is a high-performance, OpenCL-based real-time math acceleration library for audio, with special emphasis on GPU compute acceleration. It is not perfect yet, although with real-time GPU compute backing it up it is pretty good, apparently.
Wakeland says that two primary algorithms need to be catered for – time-varying convolution (in the audio processing component) and ray-tracing (in the propagation component).
“On AMD Radeon GPUs, ray-tracing can be accelerated using AMD’s open-source FireRays library, and time-varying real-time convolution can be accelerated with the AMD TrueAudio Next library.”
AMD uses a new ‘CU Reservation’ feature to reserve some compute units (CUs) for audio as necessary, alongside asynchronous compute.
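To make the convolution half of that concrete, here is a minimal CPU sketch of convolution reverb using NumPy. This is an illustration only, not AMD’s API: TrueAudio Next does this on the GPU via OpenCL, and a real engine also crossfades between impulse responses as the listener moves (the “time-varying” part).

```python
import numpy as np

def convolve_ir(dry, impulse_response):
    """Apply a room impulse response to a dry signal via convolution.

    A static, single-IR analogue of what TrueAudio Next accelerates
    on the GPU; hypothetical helper for illustration.
    """
    return np.convolve(dry, impulse_response)

# A toy 'room': direct sound plus two echoes at later sample offsets.
ir = np.zeros(64)
ir[0], ir[20], ir[45] = 1.0, 0.5, 0.25

click = np.zeros(16)
click[0] = 1.0  # a single impulse as the dry signal

wet = convolve_ir(click, ir)
print(wet[0], wet[20], wet[45])  # echoes land at the IR's tap positions
```

Because the dry signal here is a single impulse, the output simply reproduces the impulse response, which is a handy sanity check before feeding in real audio.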
Allwinner’s H8VR chipset could free VR headsets from being attached to smartphones or PCs. The H8VR can fit in a plastic or cardboard VR headset. It also has a CPU, 4K video-processing capabilities, memory and storage.
The H8VR is targeted at low-cost VR headsets. Allwinner’s chips are already being used in low-cost smartphones and tablets, and the company played a big role in driving down mobile-device prices. The H8VR could do the same for VR headsets.
One VR headset with the chip, the V3 All In One, is selling for $109.99 on Geekbuying’s website and is available for $129.99 on Aliexpress.
VR headsets like Oculus Rift and HTC Vive need to be attached to PCs due to heavy computing and power requirements. Mobile devices can also be plugged into headsets for VR.
Top chip makers aren’t developing specialized processors for independent VR or augmented reality headsets, with Qualcomm being an exception.
One popular stand-alone AR headset is Microsoft’s HoloLens, which uses Intel’s Cherry Trail processor. That chip, however, was made for tablets, not specifically for VR or AR. The VR strategies for Nvidia and AMD rely on their PC GPUs.
Meanwhile, Samsung’s new Galaxy Note 7 can be used for VR with a companion Gear VR headset. One of the chips used in the Note 7 is Qualcomm’s Snapdragon 820, which is also being targeted at VR headsets. The chip has digital signal processors and a powerful GPU to enhance sound, render scenes and recognize images, all of which improve VR.
That leads to a larger question: like Allwinner and Qualcomm, will the top chip makers consider a specialized VR chip, much as they do for tablets and smartphones? It’s a possibility if VR headset shipments explode. IDC projects 9.6 million headsets to ship this year, reaching 64.8 million by 2020.
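For context, IDC’s two figures imply a compound annual growth rate of roughly 61 percent over the four years:

```python
# Implied growth rate behind IDC's projection: 9.6M headsets in 2016
# to 64.8M in 2020, four years apart.
shipments_2016 = 9.6e6
shipments_2020 = 64.8e6
years = 4

cagr = (shipments_2020 / shipments_2016) ** (1 / years) - 1
print(f"{cagr:.0%}")  # → 61%
```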
It takes a lot of money to make and sell microprocessors – it always has done and it probably always will, but it seems to some that Advanced Micro Devices (AMD) is burning an awful lot of that rare commodity.
That’s according to analysts at Kasteel Research, who believe that AMD’s liquidity is deteriorating and that it’s burning cash “like a high roller billionaire” – whatever that is.
Nevertheless, looking at the position as objectively as possible, AMD has several irons in its fire including one that will challenge Intel at the high end, just like the old Opteron days.
AMD, according to Kasteel, got an injection of $351 million from its joint venture with Nantong Fujitsu, so it has about a billion dollars in the bank. High roller billionaires typically have several billion to play with when they’re partying like it’s 1999.
It seems to need a billion dollars a quarter if it’s playing its cards right on the green baize.
Analysts at Kasteel describe AMD as a “burning cash machine” with a serious cash flow problem – comparing it to Netflix – which is, of course, a completely different kettle of fish.
To be fair to Kasteel, it does admit it is going to short AMD in the next two or three days.
As an outsider limey, I believe that this is what is called playing the stock market, and it’s a dangerous game and a different sort of gamble to the roll of the dice, to poker, or to playing billiards [what they?] for money.
Let’s face it, as the snap from AMD’s website shows, Dr Lisa Su knows that the only way her company will fare well is to take risks. And although AMD has always been the underdog to the Great Intel, it’s good for any corporation to have competition.
Otherwise we’d all be eating the same highly processed but probably extremely nutritious food, innit?
AMD recently mentioned that it has built hardware directly with Samsung and there is a further option to tap the company in the future for product ramps.
Analyst Patrick Moorhead, of Moor Insights & Strategy, made the observation after AMD investors’ questions about where AMD was building most of its hardware became a little more pointed.
AMD has said that it bought $75 million in wafers from GlobalFoundries in Q2; that number struck Moorhead and co as a bit on the small side.
Moorhead questioned AMD on the deal and was told:
“AMD has strong foundry partnerships and our primary manufacturing partners are GLOBALFOUNDRIES and TSMC. We have run some product at Samsung and we have the option of enabling production with Samsung if needed as part of the strategic collaboration agreement they have with GLOBALFOUNDRIES to deliver 14nm FinFET process technology capacity.”
If AMD has options to build at Samsung, that could be a bad sign for GlobalFoundries. After all, AMD only spun off the outfit because it wanted a more agile manufacturing partner. GlobalFoundries struggled with its customer base, and AMD had to cancel its Krishna and Wichita parts and move to TSMC.
GloFo then canned its 20nm and 14nm XM nodes and licensed 14nm technology from Samsung, only to experience delays with that too.
Getting more out of Samsung might not mean significant volumes, but the option gives AMD a fallback if GloFo or TSMC run into ramping or yield problems. GloFo’s licensed version of Samsung’s 14nm process could easily be run at Samsung itself.
A report on the financial analysis site Seeking Alpha has issued guidance on the share price of Advanced Micro Devices (AMD), saying the company’s outlook is quite bright.
The report said that only 11 months back AMD was one of the most shorted stocks in the USA largely as a result of falling revenues and losses.
But, said Bill Maurer at Seeking Alpha, all that has completely changed now – to the point that some analysts think AMD’s share price is currently overvalued.
It all hangs on how well AMD performs when it releases its earnings next week.
The introduction of the RX 480 was supposed to help out on revenues, but there’s a question mark over how much it has contributed to the bottom line.
On the bright side, the arrangement it had with Nantong Microelectronics closed in the quarter, which meant a net cash injection of over $320 million.
The share price currently stands at over $5. AMD’s biggest hope, the processors based on its Zen architecture, are promised to start shipping later this year. This should have an effect on the stock value.
Naples is a 32-core, 64-thread Zen-based Opteron. The 16-core Zen version with a BGA socket is codenamed Snowy Owl. AMD thinks that Snowy Owl will be a great match for the communication and networking markets, which need a high-performance 64-bit x86 CPU.
Snowy Owl has 16 cores and 32 threads, all built from 14nm FinFET Zen cores. The processor supports up to 32MB of shared L3 cache. We have also mentioned a processor cluster codenamed Zeppelin. This seems to be the key to the Zen architecture, as adding Zeppelin clusters creates higher-core-count Opterons.
Each Zeppelin has eight Zen cores, and each Zen core has 512KB of dedicated L2 cache. Four Zen cores share 8MB of L3, making the total L3 cache per Zeppelin 16MB. Zeppelin (ZP) comes with PCIe Gen 3, SATA 3, 10GbE, a server controller hub, the AMD secure processor and the DDR4 memory controller. AMD is using a super-fast coherent interconnect to link multiple Zeppelin clusters together.
One Zeppelin cluster makes an 8-core, 16-thread CPU with 4MB of L2 and 16MB of L3 cache; in our case, the product codenamed Snowy Owl has 16 cores, 32 threads, 8MB of L2 (512KB x 16) and 32MB of L3 (4 x 8MB).
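The core and cache arithmetic above scales mechanically with the number of Zeppelin clusters, which a quick back-of-envelope calculation (using only the figures in the article; the helper function is our own) makes plain:

```python
# Cache totals for Zen-based Opterons built from Zeppelin clusters,
# per the article: 8 cores per Zeppelin, 512KB L2 per core, and each
# four-core group sharing 8MB of L3.
L2_PER_CORE_KB = 512
L3_PER_QUAD_MB = 8          # one four-core group shares 8MB of L3
CORES_PER_ZEPPELIN = 8

def cache_totals(zeppelins):
    cores = zeppelins * CORES_PER_ZEPPELIN
    threads = cores * 2                       # SMT: two threads per core
    l2_mb = cores * L2_PER_CORE_KB / 1024
    l3_mb = (cores // 4) * L3_PER_QUAD_MB
    return cores, threads, l2_mb, l3_mb

for name, n in [("one Zeppelin", 1), ("Snowy Owl", 2), ("Naples", 4)]:
    cores, threads, l2, l3 = cache_totals(n)
    print(f"{name}: {cores}C/{threads}T, {l2:g}MB L2, {l3}MB L3")
```

Two clusters reproduce Snowy Owl’s 16C/32T with 8MB L2 and 32MB L3, and four reproduce the 32-core Naples figures.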
The 16-core Snowy Owl uses the SP4 multi-chip module (MCM) BGA socket, while Naples uses the MCM-based SP3. The two are not pin compatible, but the 16- and 8-core Zen-based Opterons will fit in the same socket.
Snowy Owl has four independent memory channels and up to 64 lanes of PCIe Gen 3. When it comes to storage, it supports up to 16 SATA or NVMe channels, plus 8x 10GbE for some super-fast networking solutions.
As you can see, there will be plenty of Zen-based Opteron possibilities, and most of them will start showing up by mid-2017. Snowy Owl’s TDP range tops out below 100W and can sink as low as 35W. Yes, we do mean that there may well be a quad-core Zen Opteron too.
Sony is over the hump. That’s the message that the company wanted investors and market watchers to understand from its presentations earlier this week. Though it expressed it in rather more finessed terms, the core of what Sony wanted to say was that the really hard part is over. Four years after Kaz Hirai took over the corporation, the transition – a grinding, grating process that involved thousands of job losses, the sale or shuttering of entire business units and protracted battles with the firm’s old guard – is over. The restructuring is done. Now it’s time for each business unit to knuckle down and focus on profitability.
It’s not all sunshine and rainbows, of course; even as Hirai was essentially declaring “Mission Complete” on Sony’s seemingly never-ending restructuring, the company noted that it’s expecting sales in its devices division (largely focused on selling Xperia smartphones) to decline this year, and there are concerns over soft demand for products from the imaging department, which provides the camera components for Apple’s iPhones among others. Overall, though, Sony is in a healthier condition than it’s been in for a long time – and it owes much of that robust health to PlayStation, with the games and network services division’s revenue targets rising by enough to make up for any weakness in other divisions.
When Hirai took over Sony, becoming the first person to complete the leap from running PlayStation to running Sony itself (Ken Kutaragi had long been expected to do so, but dropped the ball badly with PS3 and missed his opportunity as a consequence), it was widely expected that he’d make PlayStation into the core supporting pillar of a restructured Sony. That’s precisely what’s happened – but even Hirai, surely, couldn’t have anticipated the success of the PS4, which has shaved years off the firm’s financial recovery and given it an enviable hit platform exactly when it needed one most.
Looking into the detail of this week’s announcements, there was little that we didn’t already know in terms of actual product, but a lot to be read between the lines in terms of broad strategy. For a start, the extent of PlayStation’s role as the company’s “pillar” is becoming ever clearer. Aside from its importance in financial terms, Sony clearly sees PS4 as being a launchpad for other devices and services. PlayStation VR is the most obvious of those; it will start its lifespan as an added extra being sold to the PS4’s 40 million-odd customer base, and eventually, Sony hopes, will become a driver for additional PS4 sales in its own right. The same virtuous circle effect is hoped for PlayStation Vue, the TV service aimed at PlayStation-owning “cable cutters”, which has surpassed 100,000 subscribers and is said to be rapidly growing since its full-scale launch back in March.
Essentially, this means that two major Sony launches – its first major foray into VR and its first major foray into subscriber TV – are being treated as “PlayStation-first” launches. The company is also talking up non-gaming applications for PSVR, which it sees as a major factor from quite early on in the life cycle of the device, and is rolling out PlayStation Vue clients for other platforms – but it’s still very notable that PlayStation customers are being treated as the ultimate early adopter market for Sony’s new services and products.
To some degree, that explains the company’s desire to get PS4 Neo onto the market – though I maintain that a cross-department effort to boost sales of 4K TVs is also a key driving force there. In a wider sense, though, Neo is designed to make sure that the platform upon which so much of Sony’s future – games, network services, television, VR – is being based doesn’t risk all of those initiatives by falling behind the technology curve. Neo is, of course, a far less dramatic upgrade than Microsoft’s Scorpio; but that’s precisely because Sony has so much of its corporate strategy riding on PS4, while Microsoft, bluntly, has so little riding on Xbox One. Sony needs to keep its installed base happy while encouraging newcomers to buy into the platform in the knowledge that it’s reasonably up-to-date and future proof. Microsoft can afford to be rather more experimental and even reckless in its efforts to leapfrog the competition.
Perhaps the most impressive aspect of Sony’s manoeuvring thus far is that the company has managed to position the PlayStation as the foundation of such grand plans without making the mistake Microsoft made with the original Xbox One unveiling – ignoring games to the extent that the core audience questioned whether they were still the focus. PSVR is clearly designed for far more than just games, but the early focus on games has brought gamers along for every step of the journey. PlayStation Vue, though a major initiative for Sony as a whole, is a nice extra for PlayStation owners, not something that seems to dilute the brand and its focus. On the whole, there’s no sign that PlayStation’s new role at the heart of Sony is making its core, gaming audience love it any less.
On the contrary: if PlayStation Plus subscriptions are any measure, PlayStation owners seem a pretty happy bunch. Subscriptions topped 20 million recently, according to the firm’s presentation this week, which means that over 50% of PS4’s installed base is now paying a recurring subscription fee to Sony. PlayStation Plus is relatively cheap, but that’s still a pretty big chunk of cash once you add it up – it equates to an additional three or four games in the console’s attach ratio over its lifetime, which is nothing to be sniffed at, and will likely increase the profitability of the console by quite a few percentage points. In Andrew House’s segment of this week’s presentation, he noted that the division is shifting from a packaged model towards a recurring payments model; PlayStation Plus is only one step on that journey and it’s extremely unlikely that the packaged model (be it digital or a physical package) will go away any time soon, but it does suggest a future vision in which a bundle of subscriptions – for games, TV, VR content and perhaps others – makes up the core of many customers’ transactions with Sony.
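As a rough sanity check on that attach-ratio claim, using assumed figures that are not from Sony’s presentation (roughly $50 per year for PlayStation Plus, a four-year console lifetime and $60 per full-priced game):

```python
# Back-of-envelope check: what does a PS Plus subscription add up to
# over a console's lifetime, measured in full-priced games? The
# per-year price, lifetime and game price below are assumptions.
subscribers = 20_000_000
installed_base = 40_000_000
plus_per_year = 50          # assumed annual subscription price, USD
lifetime_years = 4          # assumed console lifetime
game_price = 60             # assumed full game price, USD

share = subscribers / installed_base
lifetime_spend = plus_per_year * lifetime_years
games_equivalent = lifetime_spend / game_price

print(f"{share:.0%} of the base subscribes")
print(f"~${lifetime_spend} over the lifetime, or about "
      f"{games_equivalent:.1f} full-priced games")
```

Under these assumptions the subscription indeed lands in the “three or four games” range per subscriber.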
That the truly painful part of Sony’s transition is over is to be celebrated – a healthy Sony is a very good thing for the games business, and we should all be hoping Nintendo gets back on its feet soon too. The task of the company, however, isn’t necessarily about to get any easier. PS4’s extraordinary success needs to be sustained and grown, and while early signs are good, the whole idea of using PlayStation as a launchpad for Sony’s other businesses remains an unproven model with a shaky track record (anyone remember the ill-fated PSX, a chunky white PVR with a PS2 built into it that was supposed to usher in an era of PlayStation-powered Sony consumer electronics?). But with supportive leadership, strong signs of cooperation from other parts of the company (the first-party Spiderman game unveiled at E3 is exactly the kind of thing the relationship between PlayStation and Sony Pictures should have been yielding for decades) and a pipeline of games that should keep fans delighted along the way, PlayStation is in the strongest place it’s been for over a decade.
While Sony wowed gamers at its E3 press conference this year with a barrage of impressive content, some would argue that it was Microsoft that made the biggest splash by choosing its press conference to announce not one, but two distinct console hardware upgrades that would be hitting the market in consecutive years (Xbox One S this year, Scorpio in 2017). Years from now, this may be the grand moment that we all point to as forever changing the evolution of the console business. Sony, too, is preparing a slight upgrade to PS4 with the still-to-be-unveiled Neo, and while it won’t be as powerful as Scorpio, it’s not a stretch to assume that Sony is already working on the next, more powerful PlayStation iteration as well. We can all kiss the five or six-year console cycle goodbye now, but the publishers we spoke to at E3 all believe that this is ultimately great for the console industry and the players.
The most important aspect of all of this is the way in which Sony and Microsoft intend to handle their respective audiences. Both companies have already said that players of the older hardware will not be left behind. The ecosystem will carry on, and that to EA global publishing chief Laura Miele is a very good thing, indeed.
“I perceive it as upgrades to the hardware that will actually extend the cycle,” she told me. “I actually see it more as an incredibly positive evolution of the business strategy for players and for our industry and definitely for EA. The idea that we would potentially not have an end of cycle and a beginning of cycle I think is a positive place for our industry to be and for all of the commercial partners as well as players.
“I have an 11-year-old son who plays a lot of games. We changed consoles and there are games and game communities that he has to leave behind and go to a different one. So he plays on multiple platforms depending on what friends he’s playing with and which game he’s going to play. So the idea that you have a more streamlined thoroughfare transition I think is a big win… things like backwards compatibility and the evolution,” she continued.
“So it’s not my perception that the hardware manufacturers are going to be forcing upgrades. I really see that they’re trying to hold on and bring players along. If players want to upgrade, they can. There will be benefit to that. But it’s not going to be punitive if they hold on to the older hardware… So we’re thrilled with these announcements. We’re thrilled with the evolution. We’re thrilled with what Sony’s doing, what Microsoft’s doing and we think it’s phenomenal. I think that is good for players. It’ll be great for us as a publisher about how they’re treating it.”
Ubisoft’s head of EMEA Alain Corre is a fan of the faster upgrade approach as well. “The beautiful thing is it will not split the communities. And I think it’s important that when you’ve been playing a game for a lot of years and invested a lot of time that you can carry on without having to start over completely again. I think with the evolution of technology it’s better than what we had to do before, doing a game for next-gen and a different game from scratch for the former hardware. Now we can take the best of the next console but still have super good quality for the current console, without breaking the community up. We are quite big fans of this approach,” he said.
Corre also noted that Ubisoft loves to jump on board new technologies early (as it’s done for Wii, Kinect, VR and now Nintendo NX with Just Dance), and its studios enjoy being able to work with the newest tech out there. Not only that, but the new consoles often afford publishers the opportunity to build out new IP like Steep, he said.
“Each time there’s a new machine with more memory then our creators are able to bring something new and fresh and innovate, and that’s exciting for our fans who always want to be surprised. So the fact that Microsoft announced that they want to move forward to push the boundaries of technology again is fantastic news. Our creators want to go to the limit of technology to make the best games they can… so the games will be better in the years to come which is fantastic for this industry. And at Ubisoft, it’s also in our DNA to be [supportive] early on with new technology. We like taking some risks in that respect… We believe in new technology and breaking the frontiers and potentially attracting new fans and gamers into our ecosystem and into our brands,” Corre continued.
Take-Two boss Strauss Zelnick pointed out the continuity in the communities as well. “The ecosystems aren’t shifting as much. We essentially have a common development architecture now that’s essentially a PC architecture,” he said. And if the console market truly is entering an almost smartphone-like upgrade curve, “It would be very good for us obviously. To have a landscape…where you put a game out and you don’t worry about it,” he commented, “the same way that when you make a television show you don’t ask yourself ‘what monitor is this going to play on?’ It could play on a 1964 color television or it could play on a brand-new 4K television, but you’re still going to make a good television show.
“So we will for sure get there as an industry. We will get to the point where the hardware becomes a backdrop. And sure, constantly more powerful hardware gives us an opportunity but it would be great to get to a place where we don’t have a sine curve anymore, and I do see the sine curve flattening but I’m not sure I agree it’s going away yet… That doesn’t change any of our activities; we still have to make the very best products in the market and we have to push technology to its absolute limit to do so.”
With its aggressive pricing move on the Radeon RX 480, AMD has little choice but to continue the same strategy with the lower-positioned Radeon RX 470.
Rumored to be based on the 14nm Polaris Ellesmere Pro GPU with 32 compute units and 2048 stream processors – a significant drop from the 2304 stream processors of the Ellesmere XT-based Radeon RX 480 – the Radeon RX 470 should end up with a US $149 MSRP for the 4GB version and US $179 for the 8GB version.
According to recent rumors, the GDDR5 memory on the RX 470 will also be clocked lower, at 7,000MHz effective, giving 224GB/s of total memory bandwidth, while the TDP should end up at 110W.
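The bandwidth figure is easy to check. The rumors quoted here don’t state a bus width, but 224GB/s at a 7Gbps effective data rate implies a 256-bit bus, which is our assumption:

```python
# GDDR5 bandwidth arithmetic behind the 224GB/s figure.
effective_rate_gbps = 7.0   # 7,000MHz effective data rate per pin
bus_width_bits = 256        # assumed: not stated in the rumors

bandwidth_gb_s = effective_rate_gbps * bus_width_bits / 8
print(f"{bandwidth_gb_s:.0f} GB/s")  # → 224 GB/s
```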
With these specifications, the Radeon RX 470 could end up faster than Radeon R9 380X, which means it could be a perfect choice for 1080p gamers.
The AMD Radeon RX 480 should launch today, June 29th, while the rest of the lineup, including the Radeon RX 470 and RX 460, could come a bit later.
Researchers at the University of California, Davis, Department of Electrical and Computer Engineering have developed a 1,000-core processor which could eventually be put onto the commercial market.
The team developed the energy-efficient 621 million-transistor “KiloCore” chip, which can manage 1.78 trillion instructions per second, and since the project has IBM’s backing it could end up in the shops soon.
Team leader Bevan Baas, professor of electrical and computer engineering, said that it could be the world’s first 1,000-processor chip, and the highest clock-rate processor ever designed in a university.
While other multiple-processor chips have been created, none exceed about 300 processors. Most of those were created for research purposes and few are sold commercially. IBM, using its 32 nm CMOS technology, fabricated the KiloCore chip and could make a production run if required.
Because each processor is independently clocked, it can shut itself down to further save energy when not needed, said graduate student Brent Bohnenstiehl, who developed the principal architecture. Cores operate at an average maximum clock frequency of 1.78 GHz, and they transfer data directly to each other rather than using a pooled memory area that can become a bottleneck for data.
The 1,000 processors can execute 115 billion instructions per second while dissipating only 0.7 Watts, which means the chip can be powered by a single AA battery. The KiloCore chip executes instructions more than 100 times more efficiently than a modern laptop processor.
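Those headline numbers hang together arithmetically, assuming (for the peak figure, which the article does not spell out) one instruction per cycle per core:

```python
# KiloCore's numbers, checked: 1,000 cores at an average maximum of
# 1.78GHz give the 1.78 trillion instructions/second peak, and
# 115 billion instructions/second at 0.7W works out to roughly
# 164 billion instructions per joule.
cores = 1000
clock_hz = 1.78e9
peak_ips = cores * clock_hz        # assumes one instruction/cycle/core

sustained_ips = 115e9
power_w = 0.7
instructions_per_joule = sustained_ips / power_w

print(f"peak: {peak_ips:.2e} instructions/s")
print(f"efficiency: {instructions_per_joule:.3g} instructions/joule")
```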
The processor has already been adapted for wireless coding/decoding, video processing, encryption and other tasks involving large amounts of parallel data, such as scientific applications and datacentre work.
E3 2016 has officially come to a close, and despite the fact that Activision and EA were absent from the show floor, my experience of the show was that it was actually quite vibrant and filled with plenty of intricate booth displays and compelling new games to play. The same cannot be said for the ESA’s first ever public satellite event, E3 Live, which took place next door at the LA Live complex. The ESA managed to give away 20,000 tickets in the first 24 hours after announcing the show in late May. But as the saying goes, you get what you pay for…
The fact that it was a free event, however, does not excuse just how poor this show really was. Fans were promised by ESA head Mike Gallagher in the show’s initial announcement “the chance to test-drive exciting new games, interact with some of their favorite developers, and be among the first in the world to enjoy groundbreaking game experiences.”
I spent maybe an hour there, and when I first arrived, I genuinely questioned whether I was in the right place. But to my disbelief, the small area (maybe the size of two tennis courts) was just filled with a few tents, barely any games, and a bunch of merchandise (t-shirts and the like) being marketed to attendees. The fans I spoke with felt like they had been duped. At least they didn’t pay for their tickets…
“When we found out it was the first public event, we thought, ‘Cool we can finally go to something E3 related’ because we don’t work for any of the companies and we’re not exhibitors, and I was excited for that but then we got here and we were like ‘Uh oh, is this it?’ So we got worried and we’re a little bit upset,” said Malcolm, one of the attendees I spoke with. He added that he thought it was going to be in one of the buildings right in the middle of the LA Live complex, rather than a siphoned-off section outside with tents.
As I walked around, it was the same story from attendees. Jose, who came with his son, felt similarly to Malcolm. “It’s not that big. I expected a lot of demos, but they only had the Lego Dimensions demo. I expected something bigger where we could play some of the big, upcoming titles. All it is is some demo area with Lego and some VR stuff,” he told me.
When I asked him if he got what he thought would be an E3 experience, he continued, “Not even close, this is really disappointing. It’s really small and it’s just here. I expected more, at least to play some more. And the VR, I’m not even interested in VR. Me and my son have an Xbox One and we wanted to play Battlefield 1 or Titanfall 2 and we didn’t get that opportunity. I was like c’mon man, I didn’t come here to buy stuff. I came here to enjoy games.”
By cobbling together such a poor experience for gamers, while 50,000 people enjoy the real E3 next door, organizers risk turning off the very audience that they should be welcoming into the show with open arms. As the major publishers told me this week, E3 is in a transitional period and needs to put players first. That’s why EA ultimately hosted its own event, EA Play. “We’re hoping the industry will shift towards players. This is where everything begins and ends for all of us,” said EA global publishing chief Laura Miele.
It seems like a no-brainer to start inviting the public, and that’s what we all thought was happening with E3 Live, but in reality fans were invited to an atmosphere and an “experience” – one that barely contained games. The good news, as the quickly claimed E3 Live tickets indicated, is that there is big demand for a public event. And it shouldn’t be very complicated to pull off. If the ESA sells tickets, rather than giving them away, it can generate a rather healthy revenue stream. Give fans an opportunity to check out the games for a couple of days and let the industry conduct its business on a separate two or three days. That way, the ESA will be serving both constituents and E3 will get a healthy boost. And beyond that, professionals won’t have to worry any more about getting shoved or trampled, which nearly happened to me when a legion of frenzied gamers literally all started running into West Hall as the show floor opened at 10AM. Many of these people are clearly not qualified as industry attendees, and yet E3 allows them to register. It’s time to make E3 both more public and more professional. It’s your move, ESA.
We asked the ESA to provide comment on the reception to E3 Live but have not received a response. We’ll update this story if we get a statement.
AMD’s Zen chip will have as many as 32 cores, 64 threads and more L3 cache than you can poke a stick at.
Codenamed Naples, the chip uses the Zen architecture. Each Zen core has its own dedicated 512KB L2 cache. A cluster [shurely that should be a cloister? – ed.] of four Zen cores shares an 8MB L3 cache, which makes the total amount of shared L3 cache 64MB. This is a big chip, and of course there will be a 16-core variant.
This will be a 14nm FinFET product manufactured at GlobalFoundries and supporting the x86 instruction set. Naples has eight independent memory channels and up to 128 lanes of Gen 3 PCIe. This makes it suitable for fast NVMe memory controllers and drives. Naples also supports up to 32 SATA or NVMe drives.
If fast networking is your thing, Naples supports 16x 10GbE, with the controller integrated, probably in the chipset. Naples uses the SP3 LGA server socket.
The first Zen-based server / enterprise products will range from a modest 35W TDP up to a maximum of 180W for the fastest parts.
There will be dual-, quad-, sixteen- and thirty-two-core server versions of Zen, arriving at different times. Most of them will launch in 2017, with the possibility of a very late 2016 introduction.
It is another one of those Fudzilla-told-you-so moments. We already revealed a few Zen-based products last year. The Zen chip with a Greenland / Vega HBM2-powered GPU with HSA support will come too, but much later.
Lisa Su, AMD’s CEO, told Fudzilla that the desktop version will come first, followed by server, notebook and finally embedded parts. If that 40 percent IPC gain happens to hold across more than just a single task, AMD has a chance of giving Intel a run for its money.
This week’s E3 won’t be entirely dominated by VR, as some events over the past year have been; there’s too much interest in the prospect of new console hardware from all the major players and in the AAA line-up as this generation hits its stride for VR to grab all the headlines. Nonetheless, with both Rift and Vive on the market and PSVR building up to an autumn launch, VR is still likely to be the focus of a huge amount of attention and excitement at and around E3.
Part of that is because everyone is still waiting to see exactly what VR is going to be. We know the broad parameters of what the hardware is and what it can do – the earliest of early adopters even have their hands on it already – but the kind of experiences it will enable, the audiences it will reach and the way it will change the market are still totally unknown. The heightened interest in VR isn’t just because it’s exciting in its own right; it’s because it’s unknown, and because we all want to see the flashes of inspiration that will come to define the space.
One undercurrent to look out for at E3 is one that the most devoted fans of VR will be deeply unhappy with, but one which has been growing in strength and confidence in recent months. There’s a strong view among quite a few people in the industry (both in games and in the broader tech sector) that VR isn’t going to be an important sector in its own right. Rather, its importance will be as a stepping stone to the real holy grail – Augmented or Mixed Reality (AR / MR), a technology that’s a couple of years further down the line but which will, in this vision of the future, finally reach the mainstream consumer audience that VR will never attain.
The two technologies are related but, in practical usage, very different. VR removes the user from the physical world and immerses them entirely in a virtual world, taking over their visual senses entirely with closed, opaque goggles. AR, on the other hand, projects additional visual information onto transparent goggles or glasses; the user still sees the real world around them, but an AR headset adds an extra, virtual layer, ranging from something as simple as a heads-up display (Google’s ill-fated Glass was a somewhat clunky attempt at this) to something as complex as 3D objects that fit seamlessly into your reality, interacting realistically with the real objects in your field of vision. Secretive AR headset firm Magic Leap, which has raised $1.4 billion in funding but remains tight-lipped about its plans, prefers to divide the AR space into Augmented Reality (adding informational labels or heads-up display information to your vision) and Mixed Reality (which adds 3D objects that sit seamlessly alongside real objects in your environment).
The argument I’m hearing increasingly often is that while VR is exciting and interesting, it’s much too limited to ever be a mainstream consumer product – but the technology it has enabled and advanced is going to feed into the much bigger and more important AR revolution, which will change how we all interact with the world. It’s not what those who have committed huge resources to VR necessarily want to hear, but it’s a compelling argument, and one that’s worthy of consideration as we approach another week of VR hype.
The reasoning has two bases. The first is that VR isn’t going to become a mainstream consumer product any time soon, a conclusion based on a number of well-worn arguments that will be familiar to anyone who’s followed the VR resurgence and which have yet to receive a convincing rebuttal – other than an optimistic “wait and see”. The first of these is that VR simply doesn’t work well enough for a large enough proportion of the population to become a mainstream technology. Even with a great frame-rate and lag-free movement tracking, some aspects of VR simply induce nausea and dizziness in a decent proportion of people. One theory is that it’s down to the fact that VR only emulates stereoscopic depth perception, i.e. the difference in the image perceived by each eye, and can’t emulate focal depth perception, i.e. the physical focusing of your eye on objects at different distances from you; for some people the disparity between those two focusing mechanisms isn’t a problem, while for others, it makes them feel extremely sick.
Another theory is that it’s down to a proportion of the population getting nauseous from physical acceleration and movement not matching up with visual input, rather like getting motion sick in a car or bus. In fact, both of those things probably play a role; either way, the result is that a sizeable minority of people feel ill almost instantly when using VR headsets, and a rather more sizeable number feel dizzy and unwell after playing for extended periods of time. We won’t know just how sizeable the latter minority is until more people actually get a chance to play VR for extended periods; it’s worth bearing in mind once again that the actual VR experiences most people have had to date have been extremely short demos, on the order of 3 to 5 minutes long.
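One way to picture the first of those mismatches: a headset’s optics place the image at a single fixed focal distance, while the stereo disparity drives your eyes to converge at the virtual object’s distance, which can be anywhere. A rough sketch of the convergence angle involved, assuming an illustrative 63mm interpupillary distance and a fixed 2m focal plane (both numbers are ours, not from any particular headset):

```python
import math

IPD_M = 0.063        # typical interpupillary distance (illustrative assumption)
FOCAL_PLANE_M = 2.0  # fixed optical focal distance of the headset (assumed)

def vergence_deg(distance_m):
    """Angle between the two eyes' lines of sight to an object at distance_m."""
    return math.degrees(2 * math.atan(IPD_M / (2 * distance_m)))

# The eyes converge for the virtual object's distance, but the lens must keep
# focusing at the fixed focal plane - that disparity is the suspected culprit.
for d in (0.5, 2.0, 10.0):
    print(f"virtual object at {d}m: eyes converge at {vergence_deg(d):.2f} deg, "
          f"lens still focuses at {FOCAL_PLANE_M}m")
```

A virtual object half a metre away demands roughly four times the convergence angle of one at the focal plane, while focus never changes; the further apart those two cues drift, the worse some viewers feel.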
The second issue is simply a social one. VR is intrinsically designed around blocking out the world around you, and that limits the contexts in which it can be used. Being absorbed in a videogame while still aware of the world and the people around you is one thing; actually blocking out that world and those people is a fairly big step. In some contexts it simply won’t work at all; for others, we’re just going to have to wait and see how many consumers are actually willing to take that step on a regular basis, and your take on whether it’ll become a widespread, mainstream behaviour or not really is down to your optimism about the technology.
With AR, though, both of these problems are solved to some extent. You’re still viewing the real world, just with extra information in it, which ought to make the system far more usable even for those who experience motion sickness or nausea from VR (though I do wonder what happens regarding focal distance when some objects appear to be at a certain position in your visual field, yet exist at an entirely different focal distance from your eyes; perhaps that’s part of what Magic Leap’s secretive technology solves). Moreover, you’re not removed from the world any more than you would be when using a smartphone – you can still see and interact with the people and objects around you, while also interacting with virtual information. It may look a little bit odd in some situations, since you’ll be interacting with and looking at objects that don’t exist for other people, but that’s a far easier awkwardness to overcome than actually blocking off the entire physical world.
What’s perhaps more important than this, though, is what AR enables. VR lets us move into virtual worlds, sure; but AR will allow us to overlay vast amounts of data and virtual objects onto the real world, the world that actually matters and in which we actually live. One can think of AR as finally allowing the huge amounts of data we work with each day to break free of the confines of the screens in which they are presently trapped; both adding virtual objects to our environments, and tagging physical objects with virtual data, is a logical and perhaps inevitable evolution of the way we now work with data and communications.
While the first AR headsets will undoubtedly be a bit clunky (the narrow field of view of Microsoft’s HoloLens effort being a rather off-putting example), the evolutionary path towards smaller, sleeker and more functional headsets is clear – and once they pass a tipping point of functionality, the question of “VR or AR” will be moot. VR is, at best, a technology that you dip into for entertainment for an hour here and there; AR, at its full potential, is something as transformative as PCs or smartphones, fundamentally changing how pretty much everyone interacts with technology and information on a constant, hourly, daily basis.
Of course, it’s not a zero-sum game – far from it. The success of AR will probably be very good for VR in the long term; but if we see VR now as a stepping stone to the greater goal of AR, then we can imagine a future for VR itself only as a niche within AR. AR stands to replace and reimagine much of the technology we use today; VR will be one thing that AR hardware is capable of, perhaps, but one that appeals only to a select audience within the broad, almost universal adoption of AR-like technologies.
This is the vision of the future that’s being articulated more and more often by those who work most closely with these technologies – and while it won’t (and shouldn’t) dampen enthusiasm for VR in the short term, it’s worth bearing in mind that VR isn’t the end-point of technological evolution. It may, in fact, just be the starting point for something much bigger and more revolutionary – something that will impact the games and tech industries in a way even more profound than the introduction of smartphones.