It’s been more than five years since The NPD Group said it would start including digital data in its monthly reports on the US video game business. In those five years, not only has digital grown, but publishers, analysts, press and more have all thrown shade at NPD, questioning the relevance of a service that only offers physical sales data in an increasingly digital era. Today, NPD is finally taking that first step toward offering a more complete picture of the entire games market, unveiling its digital point-of-sale (POS) sourced service, which tracks SKU-level sales data on digital games.
“Following several years of beta testing, the Digital Games Tracking Service will allow participating clients to understand the size and growth of the digital market, and analyze attach rates and other important metrics. Combined with physical data available from NPD, these clients can gain a better understanding of the interplay between the physical and digital sales channels,” the firm explained in a press statement.
“As has been experienced across a wide variety of industries, digital has made a big impact on the overall gaming market, and we’ve risen to meet the demand for a reporting mechanism that tracks those sales in a timely and accurate way,” said Joanne Hageman, President, U.S. Toys & Games, The NPD Group. “With the participation and support of leading publishers – whose cooperation makes this possible – we are excited to launch an industry-first service that addresses a long-standing need.”
The usual report on physical sales data will now be combined with digital sales data and issued on July 21 instead of July 14; it’s expected to follow that cadence (the third Thursday of the month) moving forward. Initially, NPD has gained the support of major publishers like EA, Activision, Ubisoft, Capcom, Square Enix, Take-Two, Deep Silver and Warner Bros. There are notable exceptions, however, like Bethesda as well as first-party publishers like Microsoft, Sony and Nintendo, but NPD analyst Liam Callahan promised that more publishers would be signing on as the service evolves.
“This has been several years of beta testing and we’ve been doing this in partnership with publishers, shaping the product, encoding the data the way the industry wants to see it. It’s really at the behest of or on the behalf of the publishers that we’re moving forward with this announcement… Really the goal is to bring a new level of transparency never before seen, at least in the US market. This is really the first step. We recognize that there’s still a ways to go, we want more publishers to join, we want to be able to project for people who are not participating. It’s an evolution, it’s something that takes time and our philosophy was really to start – if we waited to have every publisher in the world to sign up it would take forever. We’ll be improving this as time goes on,” he said.
Importantly, NPD will add a notation next to titles on its charts whose figures do not include digital data. Callahan wants the service, which is being produced with the assistance of EEDAR, to ultimately be able to project data even for non-participants, but NPD isn’t starting with that ability just yet. Instead, it’ll focus on tracking revenue from full-game downloads across Xbox Live, PlayStation Network and Steam. Services like Battle.net and Uplay won’t be included at this point.
“EEDAR is excited to be part of this initiative with NPD and the participating publishers. Tracked digital revenues have seen annual growth of over 100% each year since 2012. In 2016, we’ve already tracked more digital revenue than we saw in 2012 and 2013 combined. This initiative is a great milestone for the industry which will allow publishers to make better business decisions with a broader data set,” added EEDAR CEO Rob Liguori.
Add-on content like DLC and microtransactions will be tracked as well, but that data will only be released to participants, not the media and public. “We’re waiting until that’s a little more fully baked for us to roll that out to the media. We’re doing things in stages,” Callahan said.
It may be frustrating for the media to not have a granular breakdown at the SKU level to see what portion of a game’s sales are digital versus physical, but NPD anticipates more openness as the service evolves.
NPD communications chief David Riley commented, “This is a closed service, the detailed data is only available to participants so if you’re a non-participating publisher you cannot see the data. The fact that we’re allowed to go out with something for the media is a huge step in the right direction. I think as the service matures and as the publishers get used to it and we get more on board, we have more history, we do some benchmarking, we can provide that, but what we wanted to do for multiple reasons, including appeasing the publishers was to combine full-game physical with full-game digital, keep away from the DLC, keep PC games separate because that’s a whole different ball of wax. It’s not comprehensive, but it’s the most comprehensive, we’re the first in the market to track this and we’re sort of very cautious.”
He added, “I expect a good old slamming from the industry press because of the limitations here but what we don’t want to do is open ourselves up by separating it at this time. We’ve just opened the gates right now. Just as you’ve seen a withdrawal [of data] on the physical side – we used to give units – this is sort of going to be the reverse I’m hoping and we can provide more over time.”
Working with the publishers is great, but there are numerous digitally released titles from indies which make up a growing piece of the industry pie. Will the service grow to track those titles too? “Indies are a big part of the industry in terms of their innovation and I think when I talk about our projection methodology and assets at NPD, that is part of how we can track everything, not just for publishers, including indie games and everything that’s outside the panel right now,” Callahan said.
“Some of those smaller games are published through a publisher or first-party so there are ways to get some of those with our publisher-sourced methodology, and otherwise we’re approaching it with developing a robust projection methodology. That’s certainly part of our plan, we’re not going to ignore the indie piece.”
In our previous conversations with NPD, the firm had hinted at possibly working towards the goal of global digital reports. That’s not off the table, but it’s not a focus at the moment. “US is our core competency… our vision is to expand this as much as we can in a way that makes sense for our partners. If that’s global that may be what we pursue. But we also want to do the best job that we can in projecting for the market and recruiting as many publishers as we can,” Callahan concluded.
Sony is over the hump. That’s the message that the company wanted investors and market watchers to understand from its presentations earlier this week. Though it expressed it in rather more finessed terms, the core of what Sony wanted to say was that the really hard part is over. Four years after Kaz Hirai took over the corporation, the transition – a grinding, grating process that involved thousands of job losses, the sale or shuttering of entire business units and protracted battles with the firm’s old guard – is over. The restructuring is done. Now it’s time for each business unit to knuckle down and focus on profitability.
It’s not all sunshine and rainbows, of course; even as Hirai was essentially declaring “Mission Complete” on Sony’s seemingly never-ending restructuring, the company noted that it’s expecting sales in its devices division (largely focused on selling Xperia smartphones) to decline this year, and there are concerns over soft demand for products from the imaging department, which provides the camera components for Apple’s iPhones among others. Overall, though, Sony is in a healthier condition than it’s been in for a long time – and it owes much of that robust health to PlayStation, with the games and network services division’s revenue targets rising by enough to make up for any weakness in other divisions.
When Hirai took over Sony, becoming the first person to complete the leap from running PlayStation to running Sony itself (Ken Kutaragi had long been expected to do so, but dropped the ball badly with PS3 and missed his opportunity as a consequence), it was widely expected that he’d make PlayStation into the core supporting pillar of a restructured Sony. That’s precisely what’s happened – but even Hirai, surely, couldn’t have anticipated the success of the PS4, which has shaved years off the firm’s financial recovery and given it an enviable hit platform exactly when it needed one most.
Looking into the detail of this week’s announcements, there was little that we didn’t already know in terms of actual product, but a lot to be read between the lines in terms of broad strategy. For a start, the extent of PlayStation’s role as the company’s “pillar” is becoming ever clearer. Aside from its importance in financial terms, Sony clearly sees PS4 as being a launchpad for other devices and services. PlayStation VR is the most obvious of those; it will start its lifespan as an added extra being sold to the PS4’s 40 million-odd customer base, and eventually, Sony hopes, will become a driver for additional PS4 sales in its own right. The same virtuous circle effect is hoped for PlayStation Vue, the TV service aimed at PlayStation-owning “cable cutters”, which has surpassed 100,000 subscribers and is said to be rapidly growing since its full-scale launch back in March.
Essentially, this means that two major Sony launches – its first major foray into VR and its first major foray into subscriber TV – are being treated as “PlayStation-first” launches. The company is also talking up non-gaming applications for PSVR, which it sees as a major factor from quite early on in the life cycle of the device, and is rolling out PlayStation Vue clients for other platforms – but it’s still very notable that PlayStation customers are being treated as the ultimate early adopter market for Sony’s new services and products.
To some degree, that explains the company’s desire to get PS4 Neo onto the market – though I maintain that a cross-department effort to boost sales of 4K TVs is also a key driving force there. In a wider sense, though, Neo is designed to make sure that the platform upon which so much of Sony’s future – games, network services, television, VR – is being based doesn’t risk all of those initiatives by falling behind the technology curve. Neo is, of course, a far less dramatic upgrade than Microsoft’s Scorpio; but that’s precisely because Sony has so much of its corporate strategy riding on PS4, while Microsoft, bluntly, has so little riding on Xbox One. Sony needs to keep its installed base happy while encouraging newcomers to buy into the platform in the knowledge that it’s reasonably up-to-date and future-proof. Microsoft can afford to be rather more experimental and even reckless in its efforts to leapfrog the competition.
Perhaps the most impressive aspect of Sony’s manoeuvring thus far is that the company has managed to position the PlayStation as the foundation of such grand plans without making the mistake Microsoft made with the original Xbox One unveiling – ignoring games to the extent that the core audience questioned whether they were still the focus. PSVR is clearly designed for far more than just games, but the early focus on games has brought gamers along for every step of the journey. PlayStation Vue, though a major initiative for Sony as a whole, is a nice extra for PlayStation owners, not something that seems to dilute the brand and its focus. On the whole, there’s no sign that PlayStation’s new role at the heart of Sony is making its core, gaming audience love it any less.
On the contrary; if PlayStation Plus subscriptions are any measure, PlayStation owners seem a pretty happy bunch. Subscriptions topped 20 million recently, according to the firm’s presentation this week, which means that over 50% of PS4’s installed base is now paying a recurring subscription fee to Sony. PlayStation Plus is relatively cheap, but that’s still a pretty big chunk of cash once you add it up – it equates to an additional three or four games in the console’s attach ratio over its lifetime, which is nothing to be sniffed at, and will likely increase the profitability of the console by quite a few percentage points. In Andrew House’s segment of this week’s presentation, he noted that the division is shifting from a packaged model towards a recurring payments model; PlayStation Plus is only one step on that journey and it’s extremely unlikely that the packaged model (be it digital or a physical package) will go away any time soon, but it does suggest a future vision in which a bundle of subscriptions – for games, TV, VR content and perhaps others – makes up the core of many customers’ transactions with Sony.
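The back-of-envelope arithmetic behind those figures can be sketched out explicitly. This is only an illustration: the installed base and subscriber counts come from the article, but the PS Plus price, game price and console lifetime below are my own assumptions, not numbers Sony or the article provided.

```python
# Sketch of the PlayStation Plus arithmetic above.
# The first two figures are from the article; the rest are assumptions.
PS4_INSTALLED_BASE = 40_000_000   # approximate PS4s sold (article)
PLUS_SUBSCRIBERS = 20_000_000     # PS Plus subscribers (article)
PLUS_PRICE_PER_YEAR = 50          # assumption: ~$50/year subscription
GAME_PRICE = 60                   # assumption: ~$60 full-priced game
CONSOLE_LIFETIME_YEARS = 5        # assumption: ~5-year console lifetime

subscription_rate = PLUS_SUBSCRIBERS / PS4_INSTALLED_BASE
lifetime_plus_revenue = PLUS_PRICE_PER_YEAR * CONSOLE_LIFETIME_YEARS
equivalent_games = lifetime_plus_revenue / GAME_PRICE

print(f"Share of installed base subscribing: {subscription_rate:.0%}")
print(f"Lifetime PS Plus revenue per subscriber: ${lifetime_plus_revenue}")
print(f"Equivalent extra games in the attach ratio: {equivalent_games:.1f}")
```

Under those assumptions, each subscriber is worth roughly $250 over the console’s life – about four extra games’ worth of revenue, which is where the “three or four games in the attach ratio” framing comes from.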
That the truly painful part of Sony’s transition is over is to be celebrated – a healthy Sony is a very good thing for the games business, and we should all be hoping Nintendo gets back on its feet soon too. The task of the company, however, isn’t necessarily about to get any easier. PS4’s extraordinary success needs to be sustained and grown, and while early signs are good, the whole idea of using PlayStation as a launchpad for Sony’s other businesses remains an unproven model with a shaky track record (anyone remember the ill-fated PSX, a chunky white PVR with a PS2 built into it that was supposed to usher in an era of PlayStation-powered Sony consumer electronics?). But with supportive leadership, strong signs of cooperation from other parts of the company (the first-party Spiderman game unveiled at E3 is exactly the kind of thing the relationship between PlayStation and Sony Pictures should have been yielding for decades) and a pipeline of games that should keep fans delighted along the way, PlayStation is in the strongest place it’s been for over a decade.
In what it calls a drive further into the data centre market, Cavium has entered into a definitive agreement to acquire all outstanding QLogic common stock in a deal worth approximately $1.36 billion.
The acquisition adds QLogic’s Fibre Channel and Ethernet controllers and boards to Cavium’s line-up of communications, security and general-purpose processors, making Cavium a full-line supplier to data centres.
It also means that Cavium can compete with Broadcom, Intel and Mellanox in storage and networking. The deal also gives Cavium a mature software stack in storage and networking, along with operational savings expected to reach $45 million a year by the end of 2017.
Both companies sell to server makers and large data centres, with a customer overlap of more than 60 per cent. QLogic’s customer base is highly concentrated, with nearly 60 per cent of its business over the last several years going to HP, Dell and IBM.
E3 2016 has officially come to a close, and despite the fact that Activision and EA were absent from the show floor, my experience of the show was that it was actually quite vibrant and filled with plenty of intricate booth displays and compelling new games to play. The same cannot be said for the ESA’s first ever public satellite event, E3 Live, which took place next door at the LA Live complex. The ESA managed to give away 20,000 tickets in the first 24 hours after announcing the show in late May. But as the saying goes, you get what you pay for…
The fact that it was a free event, however, does not excuse just how poor this show really was. Fans were promised by ESA head Mike Gallagher in the show’s initial announcement “the chance to test-drive exciting new games, interact with some of their favorite developers, and be among the first in the world to enjoy groundbreaking game experiences.”
I spent maybe an hour there, and when I first arrived, I genuinely questioned whether I was in the right place. But to my disbelief, the small area (maybe the size of two tennis courts) was just filled with a few tents, barely any games, and a bunch of merchandise (t-shirts and the like) being marketed to attendees. The fans I spoke with felt like they had been duped. At least they didn’t pay for their tickets…
“When we found out it was the first public event, we thought, ‘Cool we can finally go to something E3 related’ because we don’t work for any of the companies and we’re not exhibitors, and I was excited for that but then we got here and we were like ‘Uh oh, is this it?’ So we got worried and we’re a little bit upset,” said one attendee, Malcolm. He added that he thought it was going to be in one of the buildings right in the middle of the LA Live complex, rather than a cordoned-off section outside with tents.
As I walked around, it was the same story from attendees. Jose, who came with his son, felt similarly to Malcolm. “It’s not that big. I expected a lot of demos, but they only had the Lego Dimensions demo. I expected something bigger where we could play some of the big, upcoming titles. All it is is some demo area with Lego and some VR stuff,” he told me.
When I asked him if he got what he thought would be an E3 experience, he continued, “Not even close, this is really disappointing. It’s really small and it’s just here. I expected more, at least to play some more. And the VR, I’m not even interested in VR. Me and my son have an Xbox One and we wanted to play Battlefield 1 or Titanfall 2 and we didn’t get that opportunity. I was like c’mon man, I didn’t come here to buy stuff. I came here to enjoy games.”
By cobbling together such a poor experience for gamers, while 50,000 people enjoy the real E3 next door, organizers risk turning off the very audience that they should be welcoming into the show with open arms. As the major publishers told me this week, E3 is in a transitional period and needs to put players first. That’s why EA ultimately hosted its own event, EA Play. “We’re hoping the industry will shift towards players. This is where everything begins and ends for all of us,” said EA global publishing chief Laura Miele.
It seems like a no-brainer to start inviting the public, and that’s what we all thought was happening with E3 Live, but in reality they were invited to an atmosphere and an “experience” – one that barely contained games. The good news, as the quickly sold out E3 Live tickets indicated, is that there is a big demand for a public event. And it shouldn’t be very complicated to pull off. If the ESA sells tickets, rather than giving them away, it can generate a rather healthy revenue stream. Give fans an opportunity to check out the games for a couple of days and let the real industry conduct its business on a separate 2-3 days. That way, the ESA will be serving both constituencies and E3 will get a healthy boost. And beyond that, real professionals won’t have to worry anymore about getting shoved or trampled, which nearly happened to me when a legion of frenzied gamers literally all started running into West Hall as the show floor opened at 10AM. Many of these people are clearly not qualified and yet E3 allows them to register. It’s time to make E3 more public and more professional. It’s your move, ESA.
We asked the ESA to provide comment on the reception to E3 Live but have not received a response. We’ll update this story if we get a statement.
This week’s E3 won’t be entirely dominated by VR, as some events over the past year have been; there’s too much interest in the prospect of new console hardware from all the major players and in the AAA line-up as this generation hits its stride for VR to grab all the headlines. Nonetheless, with both Rift and Vive on the market and PSVR building up to an autumn launch, VR is still likely to be the focus of a huge amount of attention and excitement at and around E3.
Part of that is because everyone is still waiting to see exactly what VR is going to be. We know the broad parameters of what the hardware is and what it can do – the earliest of early adopters even have their hands on it already – but the kind of experiences it will enable, the audiences it will reach and the way it will change the market are still totally unknown. The heightened interest in VR isn’t just because it’s exciting in its own right; it’s because it’s unknown, and because we all want to see the flashes of inspiration that will come to define the space.
One undercurrent to look out for at E3 is one that the most devoted fans of VR will be deeply unhappy with, but one which has been growing in strength and confidence in recent months. There’s a strong view among quite a few people in the industry (both in games and in the broader tech sector) that VR isn’t going to be an important sector in its own right. Rather, its importance will be as a stepping stone to the real holy grail – Augmented or Mixed Reality (AR / MR), a technology that’s a couple of years further down the line but which will, in this vision of the future, finally reach the mainstream consumer audience that VR will never attain.
The two technologies are related but, in practical usage, very different. VR removes the user from the physical world and immerses them entirely in a virtual world, taking over their visual senses entirely with closed, opaque goggles. AR, on the other hand, projects additional visual information onto transparent goggles or glasses; the user still sees the real world around them, but an AR headset adds an extra, virtual layer, ranging from something as simple as a heads-up display (Google’s ill-fated Glass was a somewhat clunky attempt at this) to something as complex as 3D objects that fit seamlessly into your reality, interacting realistically with the real objects in your field of vision. Secretive AR headset firm Magic Leap, which has raised $1.4 billion in funding but remains tight-lipped about its plans, prefers to divide the AR space into Augmented Reality (adding informational labels or heads-up display information to your vision) and Mixed Reality (which adds 3D objects that sit seamlessly alongside real objects in your environment).
The argument I’m hearing increasingly often is that while VR is exciting and interesting, it’s much too limited to ever be a mainstream consumer product – but the technology it has enabled and advanced is going to feed into the much bigger and more important AR revolution, which will change how we all interact with the world. It’s not what those who have committed huge resources to VR necessarily want to hear, but it’s a compelling argument, and one that’s worthy of consideration as we approach another week of VR hype.
The reasoning has two bases. The first is that VR isn’t going to become a mainstream consumer product any time soon – a conclusion based on a number of well-worn arguments that will be familiar to anyone who’s followed the VR resurgence, and which have yet to receive a convincing rebuttal other than an optimistic “wait and see”. Chief among them is that VR simply doesn’t work well enough for a large enough proportion of the population to become a mainstream technology. Even with great frame rates and lag-free movement tracking, VR induces nausea and dizziness in a decent proportion of people. One theory is that it’s down to the fact that VR only emulates stereoscopic depth perception, i.e. the difference in the image perceived by each eye, and can’t emulate focal depth perception, i.e. the physical focusing of your eyes on objects at different distances from you; for some people the disparity between those two focusing mechanisms isn’t a problem, while for others, it makes them feel extremely sick.
Another theory is that it’s down to a proportion of the population getting nauseous from physical acceleration and movement not matching up with visual input, rather like getting motion sick in a car or bus. In fact, both of those things probably play a role; either way, the result is that a sizeable minority of people feel ill almost instantly when using VR headsets, and a rather more sizeable number feel dizzy and unwell after playing for extended periods of time. We won’t know just how sizeable the latter minority is until more people actually get a chance to play VR for extended periods; it’s worth bearing in mind once again that the actual VR experiences most people have had to date have been extremely short demos, on the order of 3 to 5 minutes long.
The second issue is simply a social one. VR is intrinsically designed around blocking out the world around you, and that limits the contexts in which it can be used. Being absorbed in a videogame while still aware of the world and the people around you is one thing; actually blocking out that world and those people is a fairly big step. In some contexts it simply won’t work at all; for others, we’re just going to have to wait and see how many consumers are actually willing to take that step on a regular basis, and your take on whether it’ll become a widespread, mainstream behaviour or not really is down to your optimism about the technology.
With AR, though, both of these problems are solved to some extent. You’re still viewing the real world, just with extra information in it, which ought to make the system far more usable even for those who experience motion sickness or nausea from VR (though I do wonder what happens regarding focal distance when some objects appear to be at a certain position in your visual field, yet exist at an entirely different focal distance from your eyes; perhaps that’s part of what Magic Leap’s secretive technology solves). Moreover, you’re not removed from the world any more than you would be when using a smartphone – you can still see and interact with the people and objects around you, while also interacting with virtual information. It may look a little bit odd in some situations, since you’ll be interacting with and looking at objects that don’t exist for other people, but that’s a far easier awkwardness to overcome than actually blocking off the entire physical world.
What’s perhaps more important than this, though, is what AR enables. VR lets us move into virtual worlds, sure; but AR will allow us to overlay vast amounts of data and virtual objects onto the real world, the world that actually matters and in which we actually live. One can think of AR as finally allowing the huge amounts of data we work with each day to break free of the confines of the screens in which they are presently trapped; both adding virtual objects to our environments, and tagging physical objects with virtual data, is a logical and perhaps inevitable evolution of the way we now work with data and communications.
While the first AR headsets will undoubtedly be a bit clunky (the narrow field of view of Microsoft’s HoloLens effort being a rather off-putting example), the evolutionary path towards smaller, sleeker and more functional headsets is clear – and once they pass a tipping point of functionality, the question of “VR or AR” will be moot. VR is, at best, a technology that you dip into for entertainment for an hour here and there; AR, at its full potential, is something as transformative as PCs or smartphones, fundamentally changing how pretty much everyone interacts with technology and information on a constant, hourly, daily basis.
Of course, it’s not a zero-sum game – far from it. The success of AR will probably be very good for VR in the long term; but if we see VR now as a stepping stone to the greater goal of AR, then we can imagine a future for VR itself only as a niche within AR. AR stands to replace and reimagine much of the technology we use today; VR will be one thing that AR hardware is capable of, perhaps, but one that appeals only to a select audience within the broad, almost universal adoption of AR-like technologies.
This is the vision of the future that’s being articulated more and more often by those who work most closely with these technologies – and while it won’t (and shouldn’t) dampen enthusiasm for VR in the short term, it’s worth bearing in mind that VR isn’t the end-point of technological evolution. It may, in fact, just be the starting point for something much bigger and more revolutionary – something that will impact the games and tech industries in a way even more profound than the introduction of smartphones.
What is the point of E3? I ask not in a snarky tone, but one of genuine curiosity, tinged with concern. I’m simply not sure what exactly the show’s organizers, the ESA, think E3 is for any more. Over the years, what was once by far the largest date in the industry’s annual calendar has struck out in various new directions as it sought to remain relevant, but it’s always ended up falling back to the path of least resistance – the familiar halls of the Los Angeles Convention Center, the habitual routine of allowing only those who can prove some industry affiliation to attend. For all that the show’s organizers regularly tout minor tweaks to the formula as earth-shattering innovation, E3 today is pretty much exactly the same beast as it was when I first attended 15 years ago – and by that point, the show’s format was already well-established.
There’s one major difference, though; E3 today is smaller. It now struggles to fill the convention center’s halls, and a while back ditched the Kentia Hall – which for years promised the discovery of unknown gems to anyone willing to sift through its morass of terrible ideas. Kentia refugees now fill gaps in the cavernous South Hall’s floor plan, elevated to sit alongside a roster of the industry’s greats that gets more meagre with each passing year. This year, attendees at E3 will find it hard not to notice a number of key absences. The loss of Konami’s once huge booth was inevitable given the company’s U-turn away from console publishing, but the decisions of EA and Activision to pull out of the show this year will be felt far more keenly.
Hence the question: what’s the point? Who, or what, is E3 actually meant to be for? It’s not for consumers, of course – they’re not allowed in, in theory, though the ESA has come up with various pointlessly convoluted ways of letting a handful of them in anyway. It’s for business, yet big players in the industry seem deeply dissatisfied with it. It’s not just EA and Activision, either; even the companies who are actually exhibiting on the show floor seem to have taken to viewing it as an addendum to the actually important part of the week, namely their live-broadcast press conferences. Once the realm only of platform holders, now every major publisher has their own – and if EA and Activision’s decision to go their own way entirely, leaving the E3 show floor, has no major negative consequences for them this year, you can be damned sure others will question the show’s value for money next year.
The problem is that the world has changed and E3 has not. Once, it was the only truly global event on the calendar; back then, London had ECTS and Tokyo had TGS, but there was no question of them truly challenging E3’s dominance. The world was a very different place back then, though. It was a time before streaming high resolution video, a time before the Internet both made the world a much smaller place, and made the hyper-local all the more relevant. Today, E3 sits in a landscape of events most of which, bluntly, justify their existence far better than the ESA’s effort does. Huge local events in major markets around the world serve their audiences better than a remote event in LA; GamesCom in Germany and TGS in Tokyo remain the biggest of those, but there are also major events in other European, Asian and Latin American countries that balance serving the business community in their regions with putting on a huge show for local consumers.
In the United States, meanwhile, E3 finds itself assailed on two sides. The PAX events have become the region’s go-to consumer shows, and a flotilla of smaller shows cater well to specific business and social niches. GDC, meanwhile, has become the de facto place to do business and for the industry to engage in conversation and debate with itself. The margin in between those two for a “showcase show that’s not actually for consumers but sort-of lets some in and is a place for the industry to do business but also please spend a fortune on a gigantic impressive stand” is an increasingly narrow piece of ground to stand on, and E3 is quite distinctly losing its balance.
A big part of the reason for that is simply that E3 has an identity crisis. It wants to be a global show in the age of the local, in an age where “global” is accomplished by pointing a camera at a stage, not by flying people from around the world to sit in the audience. It wants to be a spectacle, and a place to do business, and ends up being dissatisfying at both; it wants to excite and intrigue consumers, but it doesn’t want to let them in. The half-measures attempted over the years to square these circles have done nothing to convince anyone that E3 knows how to stay relevant; slackening ties to allow more consumers into the show simply annoys people who are there for work, and annoys the huge audience of consumers who remain excluded. The proposed consumer showcase satellite event, too, will simply annoy companies who have to divide their attention, and annoy consumers who still feel like they’re not being let in to the “real thing”. Meanwhile the show itself feels more and more like the hole in the middle of a doughnut – all these huge conferences, showcases and events are arranged around E3’s dates, but people spend less and less time at the show proper, and with EA and Activision go two of the major reasons to do so. (It’s also hard not to note, though I can’t quantify it in figures, that more industry people each year seem to stay home and watch the conferences online rather than travelling to LA.)
The answer to E3’s problems has to be an update to its objectives; it has to be for the ESA to sit down with its membership (including those who have already effectively abandoned the show) and figure out what the point of the show is, and what it’s meant to accomplish. The E3 brand has enormous cachet and appeal among consumers; it’s hard to believe that there’s no demand for a massive showcase event at the LA Convention Center that actually threw its doors open to consumers. It’s simply a question of whether ESA members think that’s something they’d like to participate in. From a business perspective, I think they’d be mad not to; the week of E3, loaded with conferences and announcements, drives the industry’s most devoted fans wild, and getting a few hundred thousand of them to pass through a show floor during that week would be one of the most powerful drivers of early sentiment and word of mouth imaginable.
As for business; it’s not like there isn’t a tried, tested and long-standing model for combining business and consumer shows that doesn’t involve a half-baked compromise. Tons of shows around the world, in games and in other fields, open for a couple of trade days before throwing the doors open to consumers over the weekend. Other approaches may also be valid, but the point is that there’s a simple and much more satisfying answer than the daft, weak-kneed reforms the ESA has attempted (“let’s let exhibitors give show passes to a certain number of super-fan consumers” – really? Really?).
E3 week remains a big deal; E3 itself may be faltering and a bit unloved, but the week around it is pretty much the only truly global showcase the industry has created for itself. That week deserves to be served by a better core event, rather than inexorably moving towards being a ton of media events orbiting a show nobody can really be bothered with. The organizers at the ESA need to be brave, bold and aggressive with what they do with E3 in future – because just falling back on the comfortable old format is starting to show diminishing returns at an alarming rate.
Steam saved PC gaming. As retailers aggressively reduced the shelf space afforded to PC titles – blaming piracy, but equally motivated, no doubt, by the proliferation of MMO and other online titles which had little or no resale value – Valve took matters into its own hands and delivered on the long-empty promises of digital distribution. It was a bumpy ride at first, but the service Valve created ushered in a new and exciting era for games on the PC. Freed from the shackles of traditional publishing and retail, it’s become a thriving platform that teems with creativity and experimentation. Steam still isn’t all things to all people, but it saved PC gaming.
Sometimes, though, you look at Steam and wonder if PC gaming was worth saving. All too often, browsing through Steam to look for interesting things to try out leaves you feeling not so much that you want to close the application in disgust as that you’d like to set the whole damned thing on fire. The reason isn’t usability, or bugginess, or anything like that – Steam has its issues, but by and large it’s a solid piece of technology – but rather the “community” that Valve has allowed to thrive on its platform. On a platform that aims to expose and promote great games from newcomers and relatively unknown indies, community feedback, reviews and recommendations are vital components, but a legacy of poor and deeply misguided decision-making from Valve has meant that engaging with those aspects of Steam can all too often feel like swimming through hot sewage.
The problem is this: Steam is almost entirely unmoderated, and Valve makes pretty much zero effort to rein in any behaviour on its platform that isn’t outright illegal. As a consequence, it’s open season for the worst behaviours and tactics of the Internet’s reactionary malcontents – the weapon of choice being brigading, whereby huge numbers of users from one of the Internet’s cesspits are sent to downvote, post terrible reviews or simply fill content pages with bile. Targets are chosen for daring to include content that doesn’t please the reactionary hordes, or for being made by a developer who once said a vaguely liberal thing on Twitter, or – of course – for being made by a woman, or for whatever other thing simply doesn’t please the trolls on any given day. The reviews on almost any game on Steam will often contain some pretty choice language and viewpoints, but hitting upon a game that’s been targeted for brigading is like running headlong into a wall of pure, frothing hatred.
Of course, Steam’s not the worst of it in most regards; the places that spawn these brigades in the first place, places like Reddit and 4chan, are far, far worse, and concoct many other malicious ways to hurt and harass their targets. That Steam permits this behaviour on an ongoing basis is, however, a huge problem – not least because Steam is a commercial platform, and provides harassers and trolls with an opportunity to directly damage the income of the developers they target.
It’s not that Valve doesn’t care about the quality of its platform. Just this week, it implemented a new feature allowing customers to see scores from recent reviews, rather than overall scores, so you can get a sense of how a game has changed since its original launch. It’s a good, pretty well considered feature. Yet its arrival really just highlights how little Valve seems to care that its storefront is being used as a tool by harassers, and filled up on a regular basis with vicious, abusive reviews and comments that no customer wants to be confronted with when browsing. Sure, traditional retail may have been hanging PC gaming out to dry all those years ago, but at least I’m reasonably sure that most traditional retail stores would have kicked out anyone who ran into their store and started screaming obscenities in the face of the first girl they saw.
And look – I get that community moderation is hard. It’s really hard. Much harder than throwing in a quick algorithm to compute review scores from recent reviews only, which is why that got tackled first; but harassment and brigading isn’t a new problem on Steam, or on the Internet in general, and there are only so many times that you can claim to simply be picking low-hanging fruit before someone points out that you haven’t even brought a ladder to the orchard. You’re not even trying. You don’t even want to try. I stated earlier on that Steam ended up this way because of bad decision making down the years, and this is what I meant; there has never been a sense that Valve wants to tackle this problem. Rather, they’ve given the impression that they hope they can fix it with some clever engineering tweak, some genius little bit of code that’ll somehow balance the need for community feedback to expose good games against the need to stop harassers and trolls from treating the platform as a 24 hour public toilet.
That’s not how community moderation works. It’s a fundamental, obtuse misunderstanding of how any sort of system designed to manage, build and support a community works – from statecraft right on down to housemate meetings to discuss unwashed dishes. You need people; you need actual people doing actual moderation jobs, granted the training and the authority to step in and put the community back on the rails when it falls off. It’s hard, and it’s actually pretty expensive, and it takes a lot of care and attention – but it’s not impossible. Look at the progress Riot Games has made in turning around the community of League of Legends, which was formerly one of the most grossly toxic communities in gaming. It’s still by no means perfect, but Riot has shown that it cares, and that it’s willing to fight to improve things, and LoL is by far a better, more welcoming and more fun game for it. Some of that was achieved with tweaks to systems and protocols; but in the end, it takes a real, breathing, thinking human to counteract attempts by other humans to be unpleasant to one another, because if there’s one thing our species has demonstrated extraordinary affinity for over the centuries, it’s finding creative ways to skirt around rules in pursuit of being unpleasant to other people.
Riot’s done a good job of this because, I believe, Riot genuinely believes that it’s the right thing to do. Therein lies the rub; I don’t think Valve cares. It should care. It has a damn-near monopoly on PC game distribution through its storefront, and that gives it responsibilities – if it doesn’t like or want those responsibilities, that’s sad in and of itself, but I’m sure a quick dip in the swimming pools they’re filling with money from Steam might take the edge off the pain. It should also care, though, because there’s a hard limit on how much a business can grow if it permits abusive behaviour towards whole classes of customers or clients. Anyone making a game that tackles a tough subject, or aims at a non-traditional audience, or who is themselves a member of a minority group; well, they’d probably love to be on Steam, but they’re thinking twice about whether it’s a good move. That’s not conjecture – it’s something I hear almost every week from developers in that position, developers whose starry-eyed view of Steam from only a few years ago has been replaced with absolute trepidation or even outright rejection of the idea of exposing themselves to the storefront’s warped excuse for a “community”.
Today, that might just mean Steam is losing out on a few bucks here and there from creators and customers who have had enough of the toxic environment it permits; but markets diversify as they grow. Steam took over when retailers failed to serve customers with an appetite for PC games. What, then, will happen to Steam if new waves of customers – younger and more diverse – find that games and creators they like are treated abysmally by the service? Valve shouldn’t need a commercial incentive to fix this problem; they should fix it because it’s the right thing to do, because tacitly enabling and permitting abuse is really little better than engaging in harassment yourself. If that’s not enough, though, there absolutely is a commercial incentive too; Steam may be dominant, but it’s not the only option for either consumers or creators. There are far more sales to be lost from permitting abuse than from telling harassers they’re no longer welcome. Valve should give the latter a try.
Researchers at Oxford University think that virtual reality could soon be used to treat psychological disorders such as paranoia.
In the British Journal of Psychiatry, which we get for the horoscope, the researchers explained how they put paranoid people into virtual social situations. Through interacting with the VR experience, subjects were able to safely experience situations that might otherwise have made them anxious. We would have thought that paranoid people would not even have put on the glasses, but apparently they did.
By the end of the day more than half of the 30 participants no longer suffered from severe paranoia. This positive impact carried through into real world situations, such as visiting a local shop.
Paranoia causes acute anxiety in social situations – after all, sufferers believe that everyone is out to get them. About two percent of the population suffers from paranoia, which is sometimes connected to schizophrenia.
Treatment methods for anxiety often involve slowly introducing the source of the anxiety in a way that allows the patient to learn that the situation is safe rather than dangerous. The VR experiment used a train ride and a lift scene to teach subjects to relearn that they were really safe.
The VR simulation did not use very photo-realistic graphics, which raises a further question about whether realism is important to having a positive impact.
Recently, Sony Computer Entertainment filed a patent with the USPTO to integrate a camera into a wearer’s contact lens, complete with the imaging sensor as well as data storage and a wireless communication module. The technology, powered wirelessly and controlled by blinking, also offers the possibility of auto-focus, zooming and image stabilization.
Sony is the second company to file a patent for integrating a wearable camera into a contact lens, after it was discovered that Samsung filed a patent in South Korea for a similar concept on April 5th. Sony’s patent, filed under the name “Contact Lens and Storage Medium”, describes a full-fledged camera device, complete with a lens, main CPU, imaging sensor, storage area and a wireless communication module. The camera unit also includes support for autofocus, zooming and image stabilization.
This isn’t the first time we’ve seen wireless sensor technology integrated into a contact lens. In January 2014, Google announced its ambitions to create a glucose-monitoring contact lens for the diagnosis and monitoring of blood sugar levels in diabetic patients. Google’s project integrates several minuscule sensors loaded with tens of thousands of transistors that measure glucose levels from a wearer’s tear drops, along with a low-power wireless transmitter to send results to other wearable devices, smartphones and PCs.
More recently on April 7, it was discovered that Samsung could be working on mass-marketing a CMOS imaging sensor into a contact lens thanks to a new patent discovered by SamMobile and GalaxyClub.nl. The patent application, filed in South Korea, includes a display that projects images directly into a wearer’s field of view and includes a camera, an antenna, and several sensors for detecting movement and eye blinks.
Sony’s contact lens patent could be a successor to its HMZ 3D displays
Rather than positioning the lens solely as a healthcare solution, Sony’s patent appears to describe a more biologically integrated implementation of the company’s earlier head-mounted displays (HMDs) with wireless video streaming. The big difference this time, however, will be the inclusion of a camera lens and a near-undetectable appearance, depending on how well Sony manages to camouflage the chips and modules in its first-generation contact lens units.
In November 2011, Sony introduced its first-generation HMZ-T1 head mounted 3D display, complete with dual 1280x720p OLED displays, support for 5.1 channel surround via earbuds and signal input from an HDMI 1.4a cable. This model weighed 420g / 0.93lbs with a launch price of $799.
In October 2012, Sony introduced the second-generation HMZ-T2 follow-up in Japan. This model reduced weight by nearly 20 percent (330g / 0.73lbs) and replaced the earbuds with a dedicated 3.5mm headphone jack, complete with near-latency-free wireless HD viewing (dual 1280x720p displays), 24p cinema picture support, and signal input via HDMI 1.4a cable.
In November 2013, Sony introduced the HMZ-T3W, the third generation of its head-mounted 3D viewer, with near-latency-free wireless HD viewing (dual 1280x720p displays), a 32-bit DAC delivering 7.1-channel audio (5Hz – 24KHz), and signal input via MHL cable and HDMI 1.4a. The device was not available in the United States; it launched in Europe for a stunning £1,300 ($2,035) and could alternatively be imported from Japan for $1,090.
Will not come cheap
Based on the initial launch prices of Sony’s previous HMZ headsets ($799 and above) and the Google Glass launch price of $1,499, and depending on the company’s target market, we might expect Sony’s first-generation contact lenses to land somewhere between those two price points when they enter mass production within the next couple of years.
Acer’s boss Jason Chen says his company will not make its own VR devices and will focus on getting its gaming products to work with the existing VR platforms.
Eyebrows were raised when Acer released its new Predator series products, which support virtual reality devices. The thought was that Acer might have a headset of its own in the works. However, Acer CEO Jason Chen said there were no such plans; the goal was to get everything working with the four current major VR platforms: Oculus, HTC’s Vive, OSVR and StarVR.
He said that VR is still at a rather early stage and so far has no killer apps or software – although that never stopped the development of the tablet, which to this day has not found a killer app of its own. Chen said that VR’s demand for high-performance hardware will nonetheless be a good opportunity for Acer.
Acer is planning to add support for VR devices into all of its future Predator series products and some of its high-end PC products.
Chen told Digitimes that Acer is investing in two US robot projects, the home-care robot Jibo and the robot arm Kubi, and that the company has also been developing robot technologies internally, with results expected within two years. Acer’s robot products will mainly target the enterprise market.
Virtual reality is, without a doubt, the most exciting thing that’s going to happen to videogames in 2016 – but it’s becoming increasingly clear, in the cold light of day, that it’s only going to be providing thrills to a relatively limited number of consumers. Market research firm Superdata has downgraded its forecast for the size of the VR market this year once more, taking it from a dizzying $5.1 billion projection at the start of the year to a more reasonable sounding $2.9 billion; though I’d argue that even this figure is optimistic, assuming as it does supply-constrained purchases of 7.2 million VR headsets by American consumers alone in 2016.
Yes, supply-constrained; Superdata reckons that some 13 million Americans will want a VR headset this year, but only 7.2 million will ship, of which half will be Samsung’s Gear VR – which is an interesting gadget in some regards, but I can’t help but feel that its toy-like nature and the low-powered hardware which drives it isn’t quite what most proponents of VR have in mind for their revolution. Perhaps the limited selection of content consumers can access on Gear VR will whet their appetite for the real thing; pessimistically, though, there’s also every chance that it will queer the pitch entirely, with 3.5 million low-powered VR gadgets being a pretty likely source of negative word of mouth regarding nausea or headaches, for example.
This is a problem VR needs to tackle; for a great many consumers, without proactive moves from the industry, word of mouth is all they’re going to get regarding VR. It’s a transformative technology, when the experience is good – as it generally is on PSVR, Rift and Vive – but it’s not one you can explain easily in a video, or on a billboard, because the whole point is that it’s a new way of seeing 3D worlds that isn’t possible on existing screens. Worse, when you see someone else using a VR headset in a video or in real life, it just looks weird and a bit silly. The technology only starts to shine for most consumers when they either experience it, or speak to a friend evangelising it on the basis of their own experience; either way, it all comes down to experience.
That’s why it was interesting to hear GameStop talk up its role as a place where consumers can come and try out PlayStation VR headsets this year. That’s precisely what the technology needs; at the moment, there are only a handful of places where you can go to try out VR, and that’s utterly insufficient. VR’s objective for 2016 isn’t just to get into the hands of a few million consumers – it’s to become desired, deeply desired, by tens of millions more. The only way that will happen is to create that army of evangelists by creating a large number of easily accessible opportunities to experience VR – and GameStop is right to position itself as the industry’s best chance of doing so in the USA. Pop-up VR booths in trendy spots might excite bloggers, but what this new sector needs in the latter half of 2016 is much more down to earth – it needs as many of America’s malls as possible to be places where shoppers can drop in and try out VR for themselves.
In a sense, what’s happening here is deeply ironic; after years of digital distribution and online shopping making retail all but irrelevant, to the point where it’s practically disappeared in some countries, the industry suddenly needs retail stores again – not to sell games, because those are, in truth, better sold online, but to sell hardware, to sell an experience. How exactly you structure a long-term business model around that – the games retailer as showroom – is something I’m honestly not sure about, but it’s something GameStop and its industry partners need to figure out, because what VR makes clear is that games do sometimes need a way to reach consumers physically, in the real world, and right now only games retail chains are positioned to do that.
This isn’t a one-time thing, either – we know that, because this has happened before, in the not-so-distant past. Nintendo’s Wii enjoyed an extraordinary sales trajectory from its first Christmas post-launch into its first full year on the market, not least because the company did a good job of putting demo units (mostly running Wii Sports, of course) into not only every games store in the world, but also into countless other popular shopping areas. It was nigh-on impossible, in the early months of the Wii, to go out shopping without encountering the brand, seeing people playing the games and having the opportunity to do so yourself – an enormously important thing for a device which, like VR, really needed to be experienced in person for its worth to become apparent. VR, if anything, magnifies that problem; at least with Wii Sports, observers could see people having fun with it. Observing someone using VR, as mentioned above, just looks daft and a bit uncomfortable.
GameStop has weathered the storm rather better than some of its peers in other countries. The United Kingdom has seen its games retail sector devastated; it’s all but impossible to walk into a specialist store and buy a game in many UK city centres, including London. Would a modern-day version of the Wii be able to thrive in an environment lacking these ready-made showrooms for its capabilities on every high street and in every shopping mall? Perhaps, but it would take enormous effort and investment – something that VR firms, especially Sony, are going to have to take very seriously as they plan how to get the broader public interested in their devices, and how to break out beyond the early adopter market.
Much of the VR industry’s performance in 2016 is going to be measured in raw sales figures, which is a bit of a shame; Vive and Rift are enormously supply constrained and having fulfillment difficulties, and the numbers we’ve seen floating around for Sony’s intentions suggest that PSVR will also be supply constrained through Christmas. The VR industry – ignoring the slightly worrying, premature offshoot that is mobile VR – is going to sell every headset it can manufacture in 2016. If it doesn’t, then there’s a very serious problem, but every indication says that this year’s key limiter will be supply, not demand.
The real measurement of how VR has performed in 2016, then, should be something else – the purchasing intent and interest level of the rest of the population. If by the time the world is mumbling through the second line of Auld Lang Syne and welcoming in 2017, consumer awareness of VR is low and purchasing intent isn’t skyrocketing – or worse, if the media’s dominant narratives about the technology are all about vomiting and migraines – then the industry will have done itself a grievous disservice. This is the year of VR, but not for the vast majority of consumers – which means that the real task of VR firms in 2016 is to convince the world that a VR headset is something it simply must own in 2017.
Troubled camera brand GoPro is going for broke and getting into the emerging VR market.
The outfit has announced a new channel dedicated to 360-degree VR content, which it calls GoPro VR. It has also unveiled LiveVR, a new version of its HeroCast wireless streaming tool dedicated to VR content. GoPro seems to think that this effort will bail it out of its financial woes.
Meanwhile, it has been talking up its VR camera rig, the six-camera Omni VR, which will cost $5,000 for a complete bundle capable of creating extreme 360-degree content. GoPro is even offering a $1,500 discount to those who already own a stack of its cameras.
Pre-orders for the Omni VR camera open today, which is also when the GoPro VR platform launches, along with GoPro VR apps for iOS and Android. Much of GoPro’s VR work is based on Kolor Eyes, software from the 360-degree specialist Kolor, which GoPro acquired around this time last year.
We expect to see the rest of the VR product line-up at the NAB show that starts in Las Vegas later today.
Software giant Microsoft has moved to deny a daft internet rumor that it was responsible for the ongoing Oculus Rift supply issues.
Oculus Rift customers were kept in the dark about the delays following the 28 March release date. Oculus confirmed that a component shortage was to blame for the long delays in supplying its VR headset to those who had pre-ordered. Then a rumour started that the mysterious “missing component” was actually the Xbox One control pad.
The rumour got a fair bit of traffic among the IT press, which did not check the facts and liked making Microsoft the villain for all its woes. A moment spent engaging the brain would have knocked the rumour stone dead. It came from a Reddit post by a bloke who claimed to have an inside source. In journalism this is called a “man you met down the pub” source; you get around it by naming the source, or by using the information to stand the story up.
Someone finally did the right thing and asked Redmond, and was promptly told that the rumour was totally false, and that anyone with questions about Rift delays should ask Oculus VR.
This morning Reddit marked the post as a “confirmed fake.” An Oculus customer support worker, whose identity was verified, also dismissed the claim.
“Totally fake, but super-entertaining,” he said. “Thanks for this! Keep the fan fiction coming!”
Clearly, whoever fabricated the leak did not know what a supply issue really is: it is when there are not enough parts ordered to build the final machine. Sometimes it is caused by a batch of faulty components, but normally it is because someone did not order enough.
Oculus has assured customers that it is working to overcome its supply issues. “We’ve taken steps to address the component shortage, and we’ll continue shipping in higher volumes each week,” reads its statement.
“We’ve also increased our manufacturing capacity to allow us to deliver in higher quantities, faster. Many Rifts will ship less than four weeks from original estimates, and we hope to beat the new estimates we’ve provided.”
The dark satanic rumor mill has manufactured a hell on earth yarn which claims that the outfit which nearly killed off VR gaming with its “Virtual Boy” wants to get back into the industry.
More than 20 years ago, Nintendo came up with the $179.95 Virtual Boy, marketed as the first “portable” video game console capable of displaying “true 3D graphics.” It failed because it was too pricey, was not really portable and made users sick. It was pulled within a year and has been cited ever since as proof that VR was not ready.
Not surprisingly, Nintendo didn’t want to go back to that place. Nintendo of America boss Reggie Fils-Aime even claimed VR “just isn’t fun” enough. Now that appears to have changed, with Nintendo saying it is “looking at VR” but won’t be in a position to give more details any time soon.
Carnegie Mellon University professor and game designer Jesse Schell outlined his 40 predictions for VR and Augmented Reality, and on the list was Schell’s belief that the Japanese company is already working on a headset – one that could take the industry in a new direction.
Schell feels that by 2022, most of the cash spent on VR will go on portable, self-contained systems that are not dependent on other mobile tech (like Samsung’s Gear VR, which needs a Samsung smartphone to function), do not require a PC or console, and are free from the cables and wires that restrict movement and immersion.
Digitimes Research claims that BOE, Kunshan Visionox and Guangzhou New Vision Opto-Electronic are developing flexible AMOLED panels, and that Kunshan Visionox should be ready for volume production in the first quarter of 2017.
BOE, Kunshan Visionox and Guangzhou New Vision will focus on 9.5-, 4.6- and 5.0-inch flexible AMOLED panels for the smartphone market. All three use PI (polyimide) substrates for their flexible AMOLED panels; for TFT backplanes, Kunshan Visionox uses LTPS while the other two have adopted oxide TFT.
It does not appear to be a simple move, as the makers have experienced difficulties developing such panels economically. China-based Tianma Micro-electronics, China Star Optoelectronics Technology and Truly Opto-electronics have also been developing non-flexible AMOLED panels, so it looks like competition will be tighter for LG and Samsung next year.