Benchmarks for Valve’s Steam Machines are out, and it does not look like the Linux-powered OS is stacking up well against Windows.
According to Ars Technica, gaming on SteamOS comes with a significant performance hit in a number of benchmarks.
The OS was put through Geekbench 3, which has a Linux version. The site used some mid-to-late-2014 releases with SteamOS ports suitable for testing, including Middle-Earth: Shadow of Mordor and Metro: Last Light Redux.
Both are intensive 3D games with built-in benchmarking tools and a variety of quality sliders to play with (including six handy presets in Shadow of Mordor’s case).
On SteamOS both games took a sizable frame rate hit: 21 to 58 percent fewer frames per second, depending on the graphical settings. On Ars’ test hardware, running Shadow of Mordor at Ultra settings and HD resolution, the OS change alone was the difference between a playable 34.5fps average on Windows and a 14.6fps mess on SteamOS.
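As a sanity check, the 58 percent figure follows directly from the two averages quoted. A minimal calculation using the article’s numbers:

```python
# Frame-rate drop from Windows to SteamOS, using the article's figures
# for Shadow of Mordor at Ultra settings and HD resolution.
windows_fps = 34.5
steamos_fps = 14.6

drop_pct = (windows_fps - steamos_fps) / windows_fps * 100
print(f"SteamOS renders {drop_pct:.1f}% fewer frames per second")
```

That works out to roughly 57.7 percent, the upper end of the quoted range.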
You would think that Valve’s own games wouldn’t have this problem, but Portal, Team Fortress 2, and DOTA 2 all took massive frame rate dips on SteamOS compared to their Windows counterparts.
Left 4 Dead 2 showed comparable performance between the two operating systems, but nothing like the Linux advantage Valve claimed a couple of years ago.
Samsung is selling off a large LCD display operation in order to concentrate full time on OLED-based products.
A report in Business Korea says that the facility in Cheonan, South Chungcheong Province, has shut down its L5 line, the fifth generation of LCD displays, and begun selling the equipment to other manufacturers.
The age of the equipment meant it was only suitable for notebook and small monitor displays. With OLED now rolling out in phones such as the recent Samsung Galaxy S6 Edge, and big-screen TVs, it seems that the company has decided to make a break with the past.
The Korean manufacturer sold off its fourth generation production line to a Chinese company last year. A spokesman for Samsung Display confirmed: “The company shut down the L5 line last month and is seeking companies that are willing to acquire idle equipment.”
Although the equipment and the products it produces may seem outdated, there is still a huge market for this stuff in lower-end electronics. Some analysts believe a sale could fetch tens of billions of Korean won. Ten billion won is about £5.6m, which doesn’t sound like much but is still better than a poke in the eye.
The Cheonan factory is likely to be converted to make OLED products. Talk of deals for AMOLED phone displays for Huawei, and even an acceleration of Samsung’s on-again, off-again Ernie and Bert relationship with Apple, is said to be at the heart of the decision to ramp up production.
Samsung still operates three LCD production lines, but analysts question whether this is the beginning of a move to OLED-only production and, if so, what effect that will have on the company as demand for cheaper LCD screens continues to grow, with production ramping up in China.
Samsung has lost share in the end user market, with recent Galaxy products failing to sell as well as their predecessors. As such, these component deals are the lifeblood of the business, with a contract to produce high-end screens for Apple alone worth billions.
Oracle has launched a direct rival to the Amazon Web Services (AWS) public cloud with its own Elastic Compute Cloud.
The product was revealed amid a flurry of cloud-related product announcements, including five in the infrastructure-as-a-service (IaaS) space, at the OpenWorld show in San Francisco on Tuesday.
Oracle Elastic Compute Cloud adds to the Dedicated Compute service the firm launched last year. The latest service lets customers make use of elastic compute capabilities to run any workload in a shared cloud compute zone, a basic public cloud offering.
“Last year we had dedicated compute. You get a rack, it’s elastic but it’s dedicated to your needs,” said Thomas Kurian, president of Oracle Product Development.
“We’ve now added in Elastic Compute, so you can just buy a certain number of cores and it runs four different operating systems: Oracle Linux, Red Hat, Ubuntu or Windows, and elastically scale that up and down.”
Oracle has yet to release pricing details for the Elastic Compute Cloud service, but chairman and CTO Larry Ellison said on Sunday that it will be priced at or below the equivalent AWS rates. For the dedicated model, Ellison revealed on Tuesday at OpenWorld that firms will pay half the cost of the equivalent AWS shared compute option.
It is not surprising that Oracle would like the opportunity to have a piece of the public cloud pie. AWS earned its owner $2.08bn in revenue in the quarter ending 30 September.
Kurian shared current use details for the Oracle Cloud as evidence of the success it has seen so far. The firm manages 1,000PB of cloud storage, and in September alone processed 34 billion transactions on its cloud. This was a result of the 35,000 companies signed up to the Oracle Cloud, which between them account for 30 million users logging in actively each day.
However, Oracle’s chances of knocking Amazon off its cloud-leader perch, or even making a slight dent in its share, seem low. The AWS revenue was only made possible by the fact that Amazon owns 30 percent of the cloud infrastructure service market, with second- and third-ranked Microsoft and IBM lagging behind at 10 and seven percent respectively.
Google and Salesforce have managed to capture less than five percent each. Indeed, realising how competitive the market is and Amazon’s dominant position, HP has just left the public cloud market.
Despite Oracle going head to head with AWS in the public cloud space, Amazon has been attempting to attract Oracle customers to its own platform.
“AWS and Oracle are working together to offer enterprises a number of solutions for migrating and deploying their enterprise applications on the AWS cloud. Customers can launch entire enterprise software stacks from Oracle on the AWS cloud, and they can build enterprise-grade Oracle applications using database and middleware software from Oracle,” the web giant notes on its site.
Amazon describes EC2 as letting users “increase or decrease capacity within minutes, not hours or days. You can commission one, hundreds or even thousands of server instances simultaneously”, making Oracle Elastic Compute Cloud a direct competitor.
Oracle has also added a hierarchical storage option for its archive storage cloud service, aimed at automatically moving data that requires long-term retention such as corporate records, scientific archives and cultural preservation content.
Ellison noted that this archiving service is priced at a tenth of the cost of Amazon’s S3 offering.
Kurian explained the archive system: “I’ve got data I need to put into the cloud but I don’t need a recovery time objective. So you get it very, very cheap”, adding that it costs $1/TB per month.
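At that rate the arithmetic is simple enough; a minimal sketch of what $1/TB per month works out to at scale (the 1PB figure is purely illustrative, not from the announcement):

```python
# Cost sketch for Oracle's archive tier at the quoted $1/TB per month.
# The 1 PB volume is a hypothetical example for illustration.
tb_stored = 1000          # 1 PB expressed in terabytes
price_per_tb_month = 1.0  # rate quoted by Kurian

monthly = tb_stored * price_per_tb_month
annual = monthly * 12
print(f"1 PB archived: ${monthly:,.0f}/month, ${annual:,.0f}/year")
```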
The firm also launched what Kurian dubbed its “lorry service” for bulk data transfer. Oracle will ship a storage appliance to a customer’s site, where the customer can load data directly onto the machine far faster than streaming it to the cloud. The appliance is then sent back to Oracle via DHL or FedEx, Kurian explained, and Oracle transfers the data into its cloud for storage.
“This is much faster if you’re moving a huge amount of data. One company is moving 250PB of data. To stream that amount of data to the cloud would take a very long time,” he said.
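A back-of-envelope calculation shows why shipping an appliance wins at this scale. The 250PB figure comes from Kurian’s example; the 1Gbit/s link speed is an assumption for illustration:

```python
# Rough estimate of streaming 250 PB to the cloud over a network link.
# 250 PB comes from the article; the sustained 1 Gbit/s uplink is an
# assumed figure for the sake of the comparison.
data_bits = 250e15 * 8   # 250 petabytes expressed in bits
link_bps = 1e9           # assumed sustained 1 Gbit/s uplink

seconds = data_bits / link_bps
years = seconds / (365 * 24 * 3600)
print(f"Streaming 250 PB at 1 Gbit/s would take about {years:.0f} years")
```

At that assumed link speed the transfer would take on the order of 60 years, which is why a courier-delivered appliance is the practical option.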
Bulk data transfer will be available from November, while the archive service is available now.
“You can go up to shop.oracle.com as a customer, enter a credit card and you can buy the service, all the PaaS services and the storage service. We’re adding compute over the next couple of weeks,” Kurian explained.
“You pay for it by credit card, or an invoice if you’re a corporate customer, and pay by the hour or month: by processor for compute, or per gigabyte per hour or month for storage.”
Oracle Container Cloud, meanwhile, lets firms run apps in Docker containers and deploy them in the Oracle Compute Cloud, supporting better automation of app deployments using technologies such as Kubernetes.
Oracle also launched additional applications that sit in its cloud, including the Data Visualisation Cloud Service. This makes visual analytics accessible to general business users who do not have access to Hadoop systems or the data warehouse.
“All you need is a spreadsheet to load your data and a browser to do the analysis,” Kurian explained.
Several new big data cloud services are also aimed at letting users more easily prepare and analyse data using Hadoop as the data store, for example Big Data Preparation and Big Data Discovery.
“With Big Data Preparation you can move data into your data lake, you can enrich the data, prepare it, do data wrangling, cleanse it and store it in the data lake. Big Data Discovery lets a business user sit in front of Hadoop, and through a browser-based dashboarding environment search the environment, discover patterns in the data, do analysis and curate subsets of the data for other teams to look at. It’s an analytic environment and complete Hadoop stack,” Kurian said.
As the end of 2015 rapidly approaches (seriously, how on earth is it October already?), the picture of what we can expect from VR in 2016 is starting to look a little less fuzzy around the edges. There’s no question that next year is the Year of VR, at least in terms of mindshare. Right now it looks like no fewer than three consumer VR systems will be on the market during calendar 2016 – Oculus Rift, PlayStation VR and Valve / HTC Vive. They join Samsung’s already released Gear VR headset, although that device has hardly set the world on fire; it’s underwhelming at best and in truth, VR enthusiasts are all really waiting for one of the big three that will arrive next year.
Those fuzzy edges, though, are a concern, and as they come into sharper focus we’re starting to finally understand what the first year of VR is going to look like. In the past week or so, we’ve learned more about pricing for the devices – and for Microsoft’s approach, the similar but intriguingly different HoloLens – and the aspect that’s brought into focus is simple: VR is going to be expensive. It’s going to be expensive enough to be very strictly limited to early adopters with a ton of disposable income. It’s quite likely going to be expensive enough that the market for software is going to struggle for the first couple of years at least, and that’s a worry.
Oculus Rift, we’ve learned, will cost “at least” $350. That’s just for the headset; you’ll also need a spectacularly powerful PC to play games in VR. No laptop will suffice, and you’re certainly out of luck with a Mac; even for many enthusiasts, the prospect of adding a major PC purchase or upgrade to a $350 headset is a hefty outlay for an early glimpse of the future. It’s likely (though as yet entirely unconfirmed) that Valve’s Vive headset will have a similar price tag and a similarly demanding minimum PC specification. The cheap end of the bunch is likely to be PlayStation VR – not because the headset will be cheap (Sony has confirmed that it is pricing it as a “platform” rather than a peripheral, suggesting a $300 or so price tag) but because the system you attach it to is a $350 PS4 rather than a much more expensive PC.
It is unreasonable, of course, to suggest that this means that people will be expected to pay upwards of $600 for Sony’s solution, or $1500 for the PC based solution. A great many people already own PS4s; quite a few own PCs capable of playing VR titles. For these people, the headset alone (and perhaps some software) is the cost of entry. That is still a pretty steep cost – enough to dissuade people with casual interest, certainly – but it’s tolerable for early adopters. The large installed base of PS4s, in particular, makes Sony’s offering interesting and could result in a market for PlayStation VR ramping up significantly faster than pessimistic forecasts suggest. On the PC side, things are a little more worrying – there’s the prospect of a standards war between Valve and Oculus, which won’t be good for consumers, and a question mark over how many enthusiasts actually own a PC powerful enough to run a VR headset reliably, though of course, the cost of PCs that can run VR will fall between now and the 2016 launch.
All the same, the crux of the matter remains that VR is going to be expensive enough – even the headsets alone – to make it into an early-adopter only market during its first year or so. It’s not just the cost, of course; the very nature of VR is going to make it into a slightly tough sell for anyone who isn’t a devoted enthusiast, and more than almost any other type of device, I think VR is going to need a pretty big public campaign to convince people to try it out and accept the concept. It’s one thing to wax lyrical about holodecks and sci-fi dreams; it’s quite another to actually get people to buy into the notion of donning a bulky headset that blocks you off from the world around you in the most anti-social way imaginable. If you’re reading a site like GamesIndustry.biz, you almost certainly get that concept innately; you may also be underestimating just how unattractive and even creepy it will seem to a large swathe of the population, and even to some of the gamer and enthusiast market VR hopes (needs!) to capture.
The multi, multi million dollar question remains, as it has been for some time – what about software? Again, Sony has something of an advantage in this area as it possesses very well regarded internal studios, superb developer relations and deep pockets; combined with its price and market penetration advantages, these ought to more than compensate for the difference in power between the PS4 and the PCs being used to power Rift and Vive, assuming (and it’s a big assumption) that the PS4’s solution actually works reliably and consistently with real games despite its lack of horsepower. The PC firms, on the other hand, need to rely on the excitement, goodwill and belief of developers and publishers to provide great games for VR in its early days. A handful of teams have devoted themselves to VR already and will no doubt do great things, but it’s a matter of some concern that a lot of industry people you talk to about PC VR today are still talking in terms of converting their existing titles to simply work in 3D VR; that will look cool, no doubt, but a conversion lacking the attention to controls, movement and interaction that’s required to make a VR world work will cause issues like motion sickness and straight-up disappointment to rear their ugly heads.
If VR is going to be priced as a system, not just a toy or a peripheral, then it needs to have software that people really, really want. Thus far, what we’ve seen are demos or half-hearted updates of old games. Even as we get close enough to consumer launches for real talk about pricing to begin, VR is still being sold off the back of science fiction dreams and long-held technological longings, not real games, real experiences, real-life usability. That desperately needs to change in the coming months.
At least HoloLens, which this week revealed an eye-watering $3000 developer kit to ship early next year, has something of a roadmap in this regard; the device will no doubt be backed up by Microsoft’s own studios (an advantage it shares, perhaps to a lesser degree, with Sony) but more importantly, it’s a device not aimed solely at games, one which will in theory be able to build up a head of steam from sales to enterprise and research customers prior to making a splash in consumer markets with a more mature, less expensive proposition. I can’t help wondering why VR isn’t going down this road; why the headlong rush to get a consumer device on the market isn’t being tempered at least a little by a drive to use the obvious enterprise potential of VR to get the devices out into the wild, mature, established and affordable before pushing them towards consumers. I totally understand the enthusiasm that drives this; I just don’t entirely buy the business case.
At the very least, one would hope that if 2016 is the year of VR, it’s also the year in which we start to actually see VR in real-life applications beyond the gaming dens of monied enthusiasts. It’s a technology that’s perfectly suited to out-of-home situations; the architect who wants to give clients a walkthrough of a new building design; the museum that wants to show how a city looked in the past; the gaming arcade or entertainment venue that wants to give people an experience that most of them simply can’t have at home on their consoles. VR is something that a great many consumers will want to have access to given the right software, the right price point and crucially, the right experience and understanding of its potential. Getting the equipment into the hands of consumers at Tokyo Games Show or EGX is a start, but only a first step. If VR’s going to be a big part of the industry’s future, then come next year, VR needs to be everywhere; it needs to be unavoidable. It can’t keep running on dreams; virtual reality needs to take a step into reality.
Sony has pulled back the curtains on its virtual reality headset, giving it an official introduction to the wild and a real-life name.
That name is PlayStation VR, which is an obvious but uninspired choice. The name the unit had earlier, Morpheus, probably a nod to the starts-great-but-ends-badly film series The Matrix, had a bit more glamour about it.
The firm has shown off the hardware to the local journalistic crowd at the Tokyo Game Show, and provided the general press with information, details and specifications.
PlayStation VR was first discussed in March 2014 when it had the cooler name. Since then the firm has been hard at work getting something ready to announce and sell, according to a post on the PlayStation blog.
A game show in Tokyo would seem the most likely place for such an announcement.
Sony said that the system is “unique”, apparently because of a special sound system, and makes the most of the Sony PS4 and its camera. The firm is expecting the device to have a big impact on PlayStation gamers and gaming.
“The name PlayStation VR not only directly expresses an entirely new experience from PlayStation that allows players to feel as if they are physically inside the virtual world of a game, but reflects our hopes that we want our users to feel a sense of familiarity as they enjoy this amazing experience,” said Masayasu Ito, EVP and division president of PlayStation product business.
“We will continue to refine the hardware from various aspects, while working alongside third-party developers and publishers and SCE Worldwide Studios, in order to bring content that delivers exciting experiences only made possible with VR.”
Specifications are available, but they relate to a prototype and are subject to change. Sony said that the system has a 100-degree field of view, a 5.7in OLED display, a 120Hz refresh rate, and a panel resolution of 1920×RGB×1080, or 960×1080 per eye.
This will not put it at the high end of the market, as the field of view is only 10 degrees greater than Google Cardboard’s and 10 degrees under that of the Oculus Rift. Some rivals go as wide as 210 degrees.
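Field of view alone doesn’t tell the whole story; how many pixels are spread across that view matters too. A rough sketch of horizontal pixels per degree, using the PlayStation VR figures above and, as an assumption not stated in this article, the Rift’s announced 1080×1200-per-eye consumer panels:

```python
# Crude angular pixel density: horizontal pixels divided by horizontal
# field of view. PSVR numbers come from the article; the Rift's
# 1080x1200-per-eye resolution is an assumed figure for comparison.
def pixels_per_degree(h_pixels, fov_degrees):
    return h_pixels / fov_degrees

psvr = pixels_per_degree(960, 100)   # 960 px per eye over 100 degrees
rift = pixels_per_degree(1080, 110)  # assumed consumer Rift specs

print(f"PSVR: ~{psvr:.1f} px/deg, Rift: ~{rift:.1f} px/deg")
```

By this crude measure the two headsets land within a fraction of a pixel per degree of each other, so the narrower field of view is not an obvious sharpness deficit.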
And no, no release date or price have been mentioned. We predict that these will be 2016 and expensive.
As the 7th console generation was coming to an end several years ago, there was much pessimism regarding the impending launch of the 8th generation. Just as 7th generation software sales were starting to lag, mobile gaming exploded, and PC gaming experienced a renaissance. It was easy to think that the console players were going to be going elsewhere to find their gaming entertainment by the time the new consoles hit the scene. However, the 8th generation consoles have had a successful launch. In fact, the Sony and Microsoft consoles are as successful as ever.
A comparison of year-over-year console software sales suggests that the 8th generation is performing better than the 7th generation – provided you exclude the Nintendo consoles. The following graph shows physical and digital software sales for years 1 through 3 of each generation for the Xbox and PlayStation platforms.
The annual numbers take into account the staggered launch cycle, so year 1 comprises different sales years for Xbox 360 and PS3. The data shows that the Sony and Microsoft platforms have outperformed their 7th generation counterparts, especially in the first two years of the cycle. The 8th generation outperforms the 7th generation even in an analysis that excludes DLC, which now accounts for an additional 5-10 percent of software sales.
However, the picture is far different if we include the Nintendo platforms. The graph below shows the same data, but now includes the Wii and Wii U in their respective launch years.
The data shows how much the “Wii bubble” contributed to the explosive growth in software sales in 2008, the year the Wii really took off as a family and party device. This data corroborates a broader theme EEDAR has seen across our research – new, shortened gaming experiences that have added diversity to the market, especially mobile, have cannibalized the casual console market, not the core console market. People will find the best platform to play a specific experience, and for many types of experiences, that is still a sofa, a controller, and a 50-inch flat-screen TV.
The shift in consoles to core games is further exemplified by an analysis of sales by genre in the 7th vs. 8th generation. The graph below shows the percentage of sales by genre in 2007 versus 2014, ordered from more casual genres to more core genres. Casual genres like General Entertainment and Music over-indexed in 2007 while core genres like Action and Shooter over-indexed in 2014.
It has become trendy to call this console generation the last console generation. EEDAR believes one needs to be very specific when making these claims. While this might be the last generation with a disc delivery and a hard drive in your living room, EEDAR does not believe the living room, sit-down experience is going away any time soon.
It is possible that one day we will report on which companies made it through the night without being hacked or without exposing their users.
For now, though, the opposite is the norm, and today we are reporting on a problem with gaming platform Steam that, you guessed it, has dangled the personal details of punters within the reach of ne’er-do-wells.
The news is not coming from Steam, or parent Valve, directly, but it is running rampant across social networks and the gaming community. The problem, according to reports and videos, was a bad one and made taking over user accounts a rather simple job.
No badass end-of-level boss to beat here, just a stage in the authentication process. A video posted online demonstrates the effort required, while some reports – with access to Steam’s PR hot air machine – say that the problem is fixed.
A statement released to gaming almanac Kotaku finds the firm in apologetic clean-up mode.
Valve told the site that some users would have their passwords reset – those who might have seen their log-ins changed under suspicious circumstances – and that in general users should already be protected from the risks at hand.
“To protect users, we are resetting passwords on accounts with suspicious password changes during that period or may have otherwise been affected,” the firm said.
“Relevant users will receive an email with a new password. Once that email is received, it is recommended that users log-in to their account via the Steam client and set a new password.
“Please note that, while an account password was potentially modified during this period, the password itself was not revealed. Also, if Steam Guard was enabled, the account was protected from unauthorized log-ins even if the password was modified.”
The firm added its apologies to the community.
The maker of chips and bits, Intel, is doing better than the cocaine nose jobs of Wall Street expected.
Intel issued its quarterly results today and saw growth in its data centers and Internet-of-Things businesses offset weak demand for personal computers that use the company’s chips.
Intel said that it was expanding its line-up of higher-margin chips used in data centers to counter slowing demand from the PC industry. Its cunning plan to buy Altera for $16.7 billion in April was all about trying to do this.
Revenue from the data center business grew 9.7 percent to $3.85 billion in the second quarter from a year earlier, helped by cloud services companies and demand for data analytics.
Chief Financial Officer Stacy Smith predicted robust growth rates for the data center group, the Internet of Things group and the NAND business.
Revenue from the PC business, which is still Intel’s largest, fell 13.5 percent to $7.54 billion in the quarter ended June 27.
However, there was more doom about the PC market, which Smith said was going to be weaker than previously expected.
Research firm Gartner thinks global PC shipments will fall 4.5 percent to 300 million units in 2015, and life is going to be pretty pants until 2016.
Intel forecast current-quarter revenue of $14.3 billion, plus or minus $500 million. Wall Street had predicted revenue of $14.08 billion.
The company’s net income fell to $2.71 billion from $2.80 billion a year earlier.
Net revenue fell 4.6 percent to $13.19 billion, but edged past the average analyst estimate of $13.04 billion. Intel’s stock fell about 18 percent this year.
The launch of Sony’s PS4 and Microsoft’s Xbox One consoles in China hasn’t attracted much fanfare, perhaps because both firms were aware from the outset of what an uphill struggle this would be, and how much potential for disappointment there was if expectations were set too high. Last week saw the first stab at estimating figures, from market intelligence firm Niko Partners, which reckons that the two platforms combined will sell a little over half a million units this year; not bad, but a tiny drop in the ocean that is China’s market for videogames.
These are not confirmed sales figures, it’s important to note; market intelligence firms essentially make educated guesses, and some of those guesses are a damn sight more educated than others, so treating anything they publish as hard data is ill-advised. Nonetheless, the basic conclusion of Niko Partners’ report is straightforward and seems to have invited no argument; the newly launched game consoles are making little impact on the Chinese market.
There are lots of reasons why this is happening. For a start, far from being starved of a much desired product, the limited pre-existing market for game consoles in China is actually somewhat saturated; the country is host to a thriving grey import market for systems from Hong Kong, Taiwan and Japan. This market hasn’t gone away with the official launch of the consoles, not least because the software made officially available in China is extremely limited. Anyone interested in console gaming will be importing games on the grey market anyway, which makes it more likely that they’ll acquire their console through the same means.
Moreover, there’s a big cultural difference to overcome. Game consoles are actually a pretty tough sell, especially to families, in countries where they’re not already well-established. Their continued strength in western markets is largely down to the present generation of parents being accustomed to game consoles in the home; cast your mind back to the 1980s and 1990s in those markets, though, and you may recall that rather a lot of parents were suspicious of game consoles not just because of tabloid fury over violent content, but because these machines were essentially computers shorn of all “educational” value. I didn’t own a console until I bought a PlayStation, because my parents – otherwise very keen for us to use and learn about computers, resulting in a parade of devices marching through the house, starting from the Amstrad CPC and ending up with a Gateway 2000 PC into which I surreptitiously installed a Voodoo 3D graphics board – wouldn’t countenance having a SNES in the house. That’s precisely the situation consoles in China now face with much of their target audience; a situation amplified even further by the extremely high-pressure nature of Chinese secondary education, which probably makes parents even more reluctant than mine when it comes to installing potentially time-sucking entertainment devices in their homes.
Besides; Chinese people, teens and adults alike, already play lots of games. PC games are enormously popular there; mobile games are absolutely huge. This isn’t virgin territory for videogames, it’s an extremely developed, high-value, complex market, and an expensive new piece of hardware needs to justify its existence in very compelling terms. Not least due to local content restrictions, neither PS4 nor Xbox One is doing that, nor are they particularly likely to do so in the future; the sheer amount of content and momentum that would be needed to make an impression upon such a mature landscape is likely to be beyond the scope of all but a truly herculean effort at local engagement and local development by either company – not just with games, but also with a unique local range of services and products beyond gaming – and neither is truly in a position to make that effort. It’s altogether more likely that both Sony and Microsoft will simply sell into China to satisfy pre-existing local demand as much as possible, without creating or fulfilling any expectations higher than that.
Is this important? Well, it’s important in so much as China is the largest marketplace in the world, with a fast-growing middle class whose appetite for luxury electronics is well-established. Apple makes increasingly large swathes of its revenue in China; companies with high-end gaming hardware would like to do something similar, were the barriers to success not raised so high. Without building a market in China, the global growth potential of the console business is fairly severely limited – the established rich nations in which consoles are presently successful have a pretty high rate of market penetration as it is, and growing sales there is only going to get tougher as birth-rates fall off (a major factor in Japan already, but most European and North American states are within spitting distance of the Japanese figures, which is worth bearing in mind next time someone shares some moronic clickbait about sexless Japan on your Facebook feed). So yes, the failure of consoles to engage strongly in China would be a big deal.
The deal looks even bigger, though, if you view China as something of a bellwether. It’s a unique country in many regards – regulations, media environment, culture, sheer scale – but in other regards, it’s on a developmental track that’s not so different from many other nations who are also seeing the rise of an increasingly monied urban middle class. If the primary difficulty in China is regulations and content restrictions, then perhaps Sony and Microsoft will find more luck in Brazil, in India, in Indonesia, in the Philippines and in the many other nations whose rapid development is creating larger and larger audiences with disposable income for entertainment. In that case, China may be the outlier, the one nation where special conditions deny consoles a chance at market success.
If the problem with China is more fundamental, though, it spells trouble down the road. If the issue is that developing nations are adopting other gaming platforms and systems long before consoles become viable for launch there, creating a huge degree of inertia which no console firm has the financial or cultural clout to overcome, then the chances are that consoles are never going to take root in any significant degree in the new middle class economies of the world. Games will be there, of course; mobile games, PC games, games on devices that haven’t even been invented yet (though honestly, Niko Partners’ tip of SmartTV games as a growth market is one that I simply can’t view from any angle that doesn’t demand instant incredulity; still, who knows?). Consoles, though, would then find themselves restricted geographically to the markets in which they already hold sway, which creates a really big limit on future growth.
That’s not the end of the world. The wealthy nations which consume consoles right now aren’t likely to go anywhere overnight, and the chances are that they’ll continue to sustain a console audience of many tens of millions – perhaps well over 100 million – for years if not decades to come. Moreover, the future of games is inevitably more fragmented than its present; different cultures, different contexts and different tastes will mean that it will be a truly rare game which is played and enjoyed to a large degree in all quadrants of the globe. There’ll still be a market for a game which “just” does great business in North America, Europe and so on; but it’ll be an increasingly small part of an ever-growing market, and its own potential for growth will be minimal. That, in the end, is a fairly hard cap on console development costs – you can’t spend vastly more money making something unless your audience either gets bigger, or more willing to pay, and there’s little evidence of either of those things in the console world right now.
The real figures from China, if and when they’re finally announced, will be interesting to see – but it’s unlikely that Niko Partners’ projections are terribly far from the truth. Whether any console company truly decides to put their weight behind a push in China, or in another developing country, over the coming years may be a deciding factor in the role consoles will play in the future of the industry as a whole.
Amazon has looked at the gaming market and felt that it is an area it can make a pile of dosh.
So far its games have been restricted to mobile devices. But it looks like that’s about to change: Amazon Game Studios is currently hiring for what it describes as an “ambitious new PC game project using the latest technology.”
It looks like this will be Amazon’s first ever PC release. Amazon hired notable developers like Kim Swift, designer of Portal, as well as Clint Hocking, who previously worked on franchises like Far Cry and Splinter Cell.
It has spent a small fortune licensing CryEngine, the same engine used to make high-end PC games like Crysis 3; bought the game streaming service Twitch last August for $970 million; and made gaming a big focus of its Fire TV media box.
The job posting adds: “We believe that games have just scratched the surface in their power to unite players, and will produce some of the future’s most influential voices in media and art.”
Valve is no stranger to its ventures having a somewhat rocky start. Remember when the now-beloved Steam first appeared, all those years ago? Everyone absolutely loathed it; it only ever really got off the ground because you needed to install it if you wanted to play Half-Life 2. It’s hard now to imagine what the PC games market would look like if Valve hadn’t persisted with their idea; there was never any guarantee that a dominant digital distribution platform would appear, and it’s entirely plausible that a messy collection of publisher-owned storefronts would instead loom over the landscape, with the indie and small developer games that have so benefited from Steam’s independence being squeezed like grass between paving stones.
That isn’t to say that Valve always get things right; most of the criticisms leveled at Steam in those early days weren’t just Luddite complaints, but were indeed things that needed to be fixed before the system could go on to be a world-beater. Similarly, there have been huge problems that needed ironing out with Valve’s other large feature launches over the years, with Steam Greenlight being a good example of a fantastic idea that has needed (and still needs) a lot of tweaking before the balance between creators and consumers is effectively achieved.
You know where this is leading. Steam Workshop, the longstanding program allowing people to create mods (or other user-generated content) for games on Steam, opened up the possibility of charging for Skyrim mods earlier this month. It’s been a bit of a disaster, to the extent that Valve and Skyrim publisher Bethesda ended up shutting down the service after, as Gabe Newell succinctly phrased it, “pissing off the Internet”.
There were two major camps among those who complained about the paid mods system for Skyrim: those who objected to the botched implementation (there were cases of people who didn’t own the rights to mod content putting it up for sale, of daft pricing, and of a questionable revenue model that awarded only 25% to the creators), and those who objected in principle to the very concept of charging for mods. The latter argument, the more purist of the two, sees mods as a labour of love that should be shared freely with “the community”, and objects to the intrusion of commerce, of revenue shares and of “greedy” publishers and storefronts into this traditionally fan-dominated area. Those who support that point of view have, understandably, been celebrating the forced retreat of Valve and Bethesda.
Their celebrations will be short-lived. Valve’s retreat is a tactical move, not a strategic one; the intention absolutely remains to extend the commercial model across Steam Workshop generally. Valve acknowledges that the Skyrim modding community, which is pretty well established (you’ve been able to release Steam Workshop content for Skyrim since 2012), was the wrong place to roll out new commercial features – you can’t take a content creating community that’s been doing things for free for three years, suddenly introduce experimental and very rough payment systems, and not expect a hell of a backlash. The retreat from the Skyrim experiment was inevitable, with hindsight. With foresight, the adoption of paid mods more broadly is equally inevitable.
Why? Why must an area which has thrived for so long without being a commercial field suddenly start being about money? There are a few reasons for the inevitability of this change – and, indeed, for its desirability – but it’s worth saying from the outset that it’s pretty unlikely that the introduction of commercial models is going to impact upon the vast majority of mod content. The vast majority of mods will continue to be made and distributed for free, for the same reasons as previously; because the creator loves the game in question and wants to play around with its systems; because a budding developer wants a sandbox in which to learn and show off their skills to potential employers; because making things is fun. Most mods will remain small-scale and will, simply, not be of commercial value; a few creators will chance their arm by sticking a price tag on such things, but the market will quickly dispose of such behaviour.
Some mods, though, are much more involved and in-depth; to realise their potential, they impact materially and financially upon the working and personal lives of their creators. For that small slice out of the top of the mod world, the introduction of commercial options will give creators the possibility of justifying their work and focus financially. It won’t make a difference at all to very many, but to the few talented creative people who will be impacted, the change to their lives could be immense.
This is, after all, not a new rule that’s being introduced, but an old, restrictive one that’s being lifted. Up until now, it’s effectively been impossible to make money from the majority of mods. They rely upon someone else’s commercial, copyrighted content; while not outright impossible technically, the task of building a mod that’s sufficiently unencumbered with stuff you don’t own for it to be sold legally is daunting at best. As such, the rule up until now has been – you have to give away your mod for free. The rule that we’ll gradually see introduced over the coming years will be – you can still give away your mod for free, but if it’s good enough to be paid for, you can put a price tag on it and split the revenue with the creator of the game.
That’s not a bad deal. The percentages certainly need tweaking; I’ve seen some not unreasonable defences of the 25% share which Bethesda offered to mod creators, but with 30% being the standard share taken by stores and other “involved but not active” parties in digital distribution deals, I expect that something like 30% for Steam, 30% for the publisher and 40% for the mod creator will end up being the standard. Price points will need to be thrashed out, and the market will undoubtedly be brutal to those who overstep the mark. There’s a deeply thorny discussion about the role of F2P to be had somewhere down the line. Overall, though, it’s a reasonable and helpful freedom to introduce to the market.
It’s also one which PC game developers are thirsting for. Supporting mod communities is something they’ve always done, on the understanding that a healthy mod scene supports sales of the game itself and that this should be reward enough. By and large, this will remain the rationale; but the market is changing, and the rising development costs of the sort of big, AAA games that attract modding communities are no longer being matched by the swelling of the audience. Margins are being squeezed and new revenue streams are essential if AAA games are going to continue to be sustainable. It won’t solve the problems by itself, or overnight; but for some games, creating a healthy after-market in user-generated content, with the developer taking a slice off the top of the economy that develops, could be enough to secure the developer’s future.
Hence the inevitability. Developers need the possibility of an extra revenue stream (preferably without having to compromise the design of their games). A small group of “elite” mod creators need the possibility of supporting themselves through their work, especially as the one-time goal of a studio job at a developer has lost its lustre as the Holy Grail of a modder’s work. The vast majority of gamers will be pretty happy to pay a little money to support the work of someone creating content they love, just as it’s transpired that most music, film and book fans are perfectly happy to pay a reasonable amount of money for content they love when they’re given flexible opportunities to do so.
Paid mods are coming, then; not to Skyrim and probably not to any other game that’s already got an established and thriving mod community, but certainly to future games with ambitions of being the next modding platform. Valve and its partners will have to learn fast to avoid “pissing off the Internet” again; but for those whose vehement arguments are based on the non-commercial “purity” of this corner of the gaming world, enjoy it while it lasts; the reprieve won this week is a temporary one.
It’s going to be another big year for games, as Newzoo is projecting that 2015 will see global gaming revenues jump 9.4 percent year-over-year to $91.5 billion. The future looks bright as well, with the research firm’s upcoming Global Games Market Report projecting worldwide revenues to reach $107 billion in 2017.
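As a quick sanity check on those projections (the revenue figures are Newzoo’s; the arithmetic below is mine), the implied annual growth rate needed to get from $91.5 billion in 2015 to $107 billion in 2017 can be worked out like so:

```python
# Back-of-the-envelope check: what compound annual growth rate takes
# global games revenue from $91.5bn (2015) to a projected $107bn (2017)?

def implied_cagr(start, end, years):
    """Compound annual growth rate between two figures `years` apart."""
    return (end / start) ** (1 / years) - 1

rate = implied_cagr(91.5, 107.0, 2)
print(f"{rate:.1%}")  # a little over 8% a year
```

In other words, the two-year projection assumes growth cools only slightly from this year’s 9.4 percent.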
As the overall market grows, the distribution of where that money is coming from will also shift. Newzoo’s projections for this year have a surging Chinese market narrowly overtaking the US as the single biggest revenue contributor, bringing in $22.2 billion (up 23 percent) compared to the American market’s $22 billion (up 3 percent). As far as regions go, Asia-Pacific is far and away the largest source of gaming revenue, accounting for $43.1 billion (up 15 percent). Latin America is the smallest of the four major markets with just $4 billion in revenues, but it is also growing the quickest, up 18 percent year-over-year.
The platforms on which people spend money gaming are also in flux. Tablet revenues are expected to be up 27 percent year-over-year to $9.4 billion, with smartphone and smartwatch revenues jumping 21 percent to $20.6 billion. However, PCs are the most popular platform for games, bringing in $27.1 billion (up 8 percent) from standard titles and MMOs, while casual webgames will draw an additional $6.6 billion (up 2 percent). Newzoo grouped TV, consoles, and VR devices into their own category, projecting them to bring in $25.1 billion (up 2 percent) in game revenues. The only market segment not seeing growth at the moment is the dedicated handheld, which Newzoo expects to bring in $2.7 billion in revenue this year (down 16 percent).
While the firm’s grouping of VR and smartwatch revenues in other categories may be unusual, it said both segments are too small to report for now.
“Short- to medium-term VR revenues will be limited and largely cannibalize on current console and PC game spending as a share of game enthusiasts invest in the latest technology and richest experience that VR offers,” Newzoo said. “Smartwatches will be a success but not add significant ‘new’ revenues to the $20.6 billion spent on smartphones this year.”
EA is shuttering four high-profile free-to-play games, all of them allied to popular IP like Battlefield and FIFA.
Battlefield Heroes, Battlefield Play4Free, Need for Speed World and FIFA World will all continue for another 90 days, at which point they will be taken offline for good. Further development on the games has stopped already.
“In more than five years since most of these titles launched, how we play games has changed dramatically,” said Patrick Soderlund, EVP of EA Games, in a statement. “These were pioneering experiences, and we’re humbled that, over the years, so many of you joined us to enjoy the games and the community.”
In terms of EA’s growing interest in free-to-play models, the real pioneer among that group is Battlefield Heroes, which was pitched at “frustrated, restricted” gamers back in 2008. Need for Speed World and Battlefield Play4Free followed, launching over the second half of 2010.
By the start of 2012, EA was reporting a combined total of 25 million players across the six games in its “Play4Free” initiative, with Battlefield Heroes and Need for Speed World contributing 10 million players each.
However, FIFA World is by no means a forerunner. It only reached open beta late in 2013, and so it is being shuttered after substantially less than two years of public availability. That would suggest not a slow decline in interest, but a lack of interest in the first place.
That’s in stark contrast to FIFA Online, the free-to-play version of the game made specifically for markets in Asia. In 2012, EA’s Andrew Wilson claimed that FIFA Online was making $100 million a year in revenue. A year later, FIFA Online 3, the most recent iteration, was the leading online sports game in both traffic and revenue in Korea.
One thing is certain: take these four titles away from EA’s free-to-play games on Origin and you’re left with only Command & Conquer: Tiberium Alliances and Star Wars: The Old Republic – in his statement, Soderlund stressed the latter’s “enthusiastic and growing” community, and reiterated EA’s commitment to providing new content.
The remainder of the company’s free-to-play catalog is composed of games like Outernauts, The Simpsons: Tapped Out and Bejeweled Blitz. Casual, social, call them what you will, but they are intended for a very different audience to Need for Speed World and Battlefield Play4Free, and that audience has just lost two-thirds of the games EA had made to satisfy its needs.
Moore’s Law will be more relevant in the 20 years to come than it was in the past 50 as the Internet of Things (IoT) creeps into our lives, Intel has predicted.
The chip maker is marking the upcoming 50th anniversary of Moore’s Law on 19 April by asserting that the best is yet to come, and that the law will become more relevant in the next two decades as everyday objects become smaller, smarter and connected.
Moore’s Law has long been touted as responsible for most of the advances in the digital age, from personal computers to supercomputers, despite Intel admitting in the past that it wasn’t enough.
Named after Gordon Moore, co-founder of Intel and Fairchild Semiconductor, Moore’s Law is the observation that the number of transistors in a dense integrated circuit will double approximately every two years.
Moore wrote a paper in 1965 describing a doubling every year in the number of components per integrated circuit. He revised the forecast in 1975, doubling the time to two years, and his prediction has proved accurate.
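To see what that two-year doubling compounds to, here is a minimal sketch; the starting figure of 6,000 components is arbitrary, chosen purely for illustration, not an actual Intel transistor count:

```python
# Illustration of Moore's Law as revised in 1975: the number of components
# per integrated circuit doubles roughly every two years.

def transistors(start_count, start_year, year, period=2):
    """Project a component count, assuming one doubling every `period` years."""
    doublings = (year - start_year) / period
    return start_count * 2 ** doublings

# Hypothetical chip with 6,000 components in 1975, projected forward.
for year in (1975, 1995, 2015):
    print(year, int(transistors(6_000, 1975, year)))
```

Forty years at that pace is 20 doublings, a factor of 2**20 – better than a million-fold – which is why the observation has held such sway over long-term planning.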
The law now is used in the semiconductor industry to guide long-term planning and to set targets for research and development.
Many digital electronic devices and manufacturing developments are strongly linked to Moore’s Law, whether it’s microprocessor prices, memory capacity or sensors, all improving at roughly the same rate.
More recently, Intel announced the development of 3D NAND memory, which the company said was guided by Moore’s Law.
Intel senior fellow Mark Bohr said on a recent press call that, while Moore’s Law has been going strong for 50 years, he doesn’t see it slowing down, adding that Moore himself didn’t realise it would hold true for 50 years. Rivals such as AMD have also had their doubts.
“[Moore] thought it would push electronics into new spaces but didn’t realise how profound this would be, for example, the coming of the internet,” said Bohr.
“If you’re 20-something [the law] might seem somewhat remote and irrelevant to you, but it will be more relevant in the next 20 years than it was in the past 50, and may even dwarf this importance.
“We can see about 10 years ahead, so our research group has identified some promising options [for 7nm and 5nm] not yet fully developed, but we think we can continue Moore’s Law for at least another 10 years.”
Intel believes that upcoming tech will be so commonplace that it won’t even be a ‘thing’ anymore. It will “disappear” into all the places we inhabit and into clothing, into ingestible devices that improve our health, for example, and “it will just become part of our surroundings” without us even noticing it.
“We are moving to the last squares in the chess board, shrinking tech and making it more power efficient meaning it can go into everything around us,” said Bohr.
The Intel fellow described the law as a positive force, but he also believes the industry needs to think hard about security, because once products become smart they can become targets for digital attacks.
“Once you put intelligence in every object round you, the digital becomes physical. [For example] if your toaster becomes connected and gets a virus it’s an issue, but not so important as if your car does,” he said.
“We have to think how we secure these endpoints and make sure security and privacy are considered upfront and built into everything we deploy.”
Bohr explained that continuing Moore’s Law isn’t just a matter of making chips smaller; the technology industry must continually innovate new device structures to keep it going.
“Moore’s Law is exponential and you haven’t seen anything yet. The best is yet to come. I’m glad to hand off to the next generation entering the workforce; to create new exciting experiences, products and services to affect the lives of billions of people on the planet,” added Bohr.
“Moore’s Law is the North Star guiding Intel. It is the driving force for the people working at Intel to continue the path of Gordon’s vision, and will help enable emerging generations of inventors, entrepreneurs and leaders to re-imagine the future.”
During a presentation at the Game Developers Conference earlier this month, Boss Fight Entertainment’s Damion Schubert suggested the industry drop the term “whales,” calling it disrespectful to the heavy spenders who make the free-to-play business model possible. As an alternative, he proposed calling them “patrons,” as their largesse allows the masses to enjoy works that otherwise could not be made and maintained.
After his talk, Schubert spoke with GamesIndustry.biz about his own experiences with heavy spending customers. During his stint at BioWare Austin, Schubert was a lead designer on Star Wars: The Old Republic as it transitioned from its original subscription-based business model to a free-to-play format.
“I think the issue with whales is that most developers don’t actually psychologically get into the head of whales,” Schubert said. “And as a result, they don’t actually empathize with those players, because most developers aren’t the kind of person that would shell out $30,000 to get a cool speeder bike or whatnot… I think your average developer feels way more empathy for the free players and the light spenders than the whales because the whales are kind of exotic creatures if you think about them. They’re really unusual.”
Schubert said whales, at least those he saw on The Old Republic, don’t have uniform behavior patterns. They weren’t necessarily heavy raiders, or big into player-vs-player competition. They were just a different class of customer, with the only common attribute being that they apparently liked to spend money. Some free-to-play games have producers whose entire job is to try to understand those customers, Schubert said, setting up special message boards for that sub-community of player, or letting them vote on what content should be added to a game next.
“When you start working with these [customers], there’s a lot of concern that they are people who have gambling problems, or kids who have no idea of the concept of money,” Schubert said.
But from his experience on The Old Republic, Schubert came to understand that most of that heavy spending population is simply people who are legitimately rich and don’t have a problem with devoting money to something they see as a hobby. Schubert said The Old Republic team was particularly mindful of free-to-play abuse, and put spending limits in place to protect people from credit card fraud or kids racking up unauthorized charges. If someone wanted to be a heavy spender on the game, they had to call up customer service and specifically ask for those limits to be removed.
“If you think about it, they wanted to spend money so much that they were willing to endure what was probably a really annoying customer service call so they could spend money,” Schubert said.
The Old Republic’s transition from a subscription-based model to free-to-play followed a wider shift in the massively multiplayer online genre. Schubert expects many of the traditional PC and console gaming genres like fighting games and first-person shooters to follow suit, one at a time. That said, free-to-play is not the business model of the future. Not the only one, at least.
“I think the only constant in the industry is change,” Schubert said when asked if the current free-to-play model will eventually fall out of favor. “So yeah, it will shift. And it will always shift because people find a more effective billing model. And the thing to keep in mind is that a more effective billing model will come from customers finding something they like better… I think there is always someone waiting in the wings with a new way of how you monetize it. But I do think that anything we’re going to see in the short term, at least, is probably going to start with a great free experience. It’s just so hard to catch fire; there are too many competitive options that are free right now.”
Two upstart business models Schubert is not yet sold on are crowdfunding and alpha-funding. As a consumer, he has reservations about both.
“The Wild West right now is the Kickstarter stuff, which is a whole bunch of companies that are making their best guess about what they can do,” Schubert said. “Many of them are doing it very, very poorly, because it turns out project management in games is something the big boys don’t do very well, much less these guys making their first game and trying to do it on a shoestring budget. I think that’s a place where there’s a lot more caveat emptor going on.”
Schubert’s golden rule for anyone thinking of supporting a Kickstarter is to only pledge an amount of money you would be OK losing forever with nothing to show for it.
“At the end of the day, you’re investing on a hope and a dream, and by definition, a lot of those are just going to fail or stall,” Schubert said. “Game development is by definition R&D. Every single game that gets developed is trying to find a core game loop, trying to find the magic, trying to find the thing that will make it stand out from the 100 other games that are in that same genre. And a lot of them fail. You’ve played 1,000 crappy games. Teams didn’t set out to make crappy games; they just got there and they couldn’t find the ‘there’ there.”
He wasn’t much kinder to the idea of charging people for games still in an early stage of development.
“I’m not a huge fan of Early Access, although ironically, I think the MMO genre invented it,” Schubert said. “But on the MMOs, we needed it because there are things on an MMO that you cannot test without a population. You cannot test a 40-man raid internally. You cannot test large-scale political systems. You cannot test login servers with real problems from different countries, server load and things like that. Early Access actually started in my opinion, with MMOs, with the brightest of hopes and completely and totally clean ideals.”
Schubert has funded a few projects in Early Access, but said he wound up getting unfinished games in return. Considering he works on unfinished games for a living, he doesn’t have much patience for them in his spare time, and has since refrained from supporting games in Early Access.
“I genuinely think there are very few people in either Kickstarter or Early Access that are trying to screw customers,” Schubert said. “I think people in both those spaces are doing it because they love games and want to be part of it, and it’s hard for me to find fault in that at the end of the day.”