Microsoft’s Xbox division is in a much healthier state today than it was a year ago. It’s had a tough time of it; forced to reinvent itself in an excruciating, public way as the original design philosophy and marketing message for the Xbox One transpired to be about as popular as breaking wind in a crowded lift, resulting in executive reshuffles and a tricky refocus of the variety that would ordinarily be carried out pre-launch and behind closed doors. Even now, Xbox One remains lumbered with the fossilised detritus of its abortive original vision; Kinect 2.0 has been shed, freeing up system resources and marking a clear departure for the console, but other legacy items like the expensive hardware required for HDMI input and TV processing are stuck right there in the system’s hardware and cannot be extracted until the inevitable redesign of the box rolls around.
All the same, under Phil Spencer’s tenure as Xbox boss, the console has achieved a better turnaround than any of us would have dared to expect – but that, perhaps, speaks to the low expectations everyone had. In truth, despite the sterling efforts of Spencer and his team, Xbox One is still a console in trouble. A great holiday sales season was widely reported, but actually only happened in one territory (the USA, home turf that was utterly dominated by Xbox in the previous generation), was largely predicated on a temporary price-cut and was somewhat marred by serious technical issues that dogged the console’s headline title for the season, the Master Chief Collection.
Since the start of 2015, things have settled down to a more familiar pattern once more; PS4 consistently outsells Xbox One, even in the USA, generally racking up more than double the sales of its competitor in global terms. Xbox One sells better month-on-month than the Wii U, but that’s cold comfort indeed given that Nintendo’s console is widely seen as an outright commercial failure, and Nintendo has all but confirmed that it will receive an early bath, with a replacement in the form of Nintendo NX set to be announced in 2016. Microsoft isn’t anywhere near that level of crisis, but nor are its sales in 2015 thus far outside the realms of comparison with Wii U – and their installed bases are nigh-on identical.
The odd thing about all of this, and the really positive thing that Microsoft and its collaborators like to focus on, is that while the Xbox One looks like it’s struggling, it’s actually doing markedly better than the Xbox 360 was at the same point in its lifespan – by my rough calculations, Xbox One is about 2.5 million units north of the installed base of Xbox 360 at the same point. Oddly, that makes it more comparable with PS3, which was, in spite of its controversy-dogged early years, a much faster seller out the door than Microsoft’s console. The point stands, though, that in simple commercial terms Xbox One is doing better than Xbox 360 did – it just happens that PS4 is doing better than any console has ever done, and casting a long shadow over Microsoft’s efforts in the process.
The problem with this is that I don’t think very many people are under the impression that Microsoft, whose primary businesses lie in the sale of office and enterprise software, cloud services and operating systems, is in the videogames business just in order to turn a little profit. Ever since the departure of Steve Ballmer and the appointment of the much more business-focused Satya Nadella as CEO, Xbox has looked increasingly out of place at Microsoft, especially as projects like Surface and Windows Phone have been de-emphasised. If Xbox still has an important role, it’s as the flag-bearer for Microsoft’s brand in the consumer space; but even at that, the “beach-head in the living room” is far less important now that Sony no longer really looks like a competitor to Microsoft, the two companies having streamlined themselves to a point where they don’t really focus on the same things any more. Besides, Xbox One is being left behind in PS4’s dust; even if Microsoft felt like it needed a beach-head in the living room, Xbox wouldn’t exactly be doing the job any more.
But wait, we’ve been here before, right? All those rumours about Microsoft talking to Amazon about unloading the Xbox division came to nothing only a few short months ago, after all. GDC saw all manner of talk about Xbox One’s place in the Windows 10 ecosystem; Spencer repeatedly mentioned the division having Nadella’s backing, and then there’s the recent acquisition of Minecraft, which surely seems like an odd thing to take place if the position of Xbox within the Microsoft family is still up in the air. Isn’t this all settled now?
Perhaps not, because the rumours just won’t stop swirling that Microsoft has quietly put Xbox on the market and is actively hunting for a buyer. During GDC and ever since, the question of who will come to own Xbox has been posed and speculated upon endlessly. The console’s interactions with Windows 10, including the eventual transition of its own internal OS to the Windows 10 kernel; the supposed backing of Nadella; the acquisition of Minecraft; none of these things have really deterred the talk that Microsoft doesn’t see Xbox as a core part of its business any more and would be happy to see it gone. The peculiar shake-up of the firm’s executive team recently, with Phil Harrison quietly departing and Kudo Tsunoda stepping up to share management of some of Microsoft Game Studios’ teams with Phil Spencer, has added fuel to the fire; if you hold it up at a certain angle to the light, this decision could look like it’s creating an internal dividing line that would make a possible divestment easier.
Could it happen? Well, yes, it could – if Microsoft is really determined to sell Xbox and can find a suitable bidder, it could all go far more smoothly than you may imagine. Xbox One would continue to be a part of the Windows 10 vision to some extent, and would probably get its upgrade to the Windows 10 kernel as well, but would no longer be Microsoft hardware – not an unfamiliar situation for a company whose existence has mostly been predicated on selling operating systems for other people’s hardware. Nobody would buy Xbox without getting Halo, Forza and various other titles into the bargain, but Microsoft’s newly rediscovered enthusiasm for Windows gaming would suggest a complex deal wherein certain franchises (probably including Minecraft) remain with Microsoft, while others went off with the Xbox division. HoloLens would remain a Microsoft project; it’s not an Xbox project right now and has never really been pushed as an Xbox One add-on, despite the immediate comparisons it prompted with Sony’s Morpheus. Xbox games would still keep working with the Azure cloud services (Microsoft will happily sell access to that to anyone, on any platform), on which framework Xbox Live would continue to operate. So yes, Xbox could be divorced from Microsoft, maintaining a close and amiable relationship with the requisite parts of the company while taking up residence in another firm’s stable – a firm with a business that’s much more in line with the objectives of Xbox than Microsoft now finds itself to be.
This, I think, is the stumbling block. I’m actually quite convinced that Microsoft would like to sell the Xbox division and has held exploratory talks to that end; I’m somewhat less convinced, but prepared to believe, that those talks are continuing even now. However, I’m struggling to imagine a buyer. None of Xbox’s rivals would be in the market to buy such a large division, and no game company would wish to lumber itself with a platform holder business. Neither Apple nor Google makes the slightest sense as a new home for Xbox either; the whole product is distinctly “un-Apple” in its ethos and approach, while Google is broadly wary of hardware and almost entirely uninterested in games.
Amazon was the previously mentioned suitor, and to my mind, remains the most likely purchaser – but it’s seemingly decided to pursue its own strategy for living room devices for now, albeit with quite limited success. I could see Amazon still “exploring options” in this regard with Microsoft, but if that deal was going to happen, I would have expected it to happen last year. Who else is out there, then? Netflix, perhaps, is an interesting outside possibility – the company’s branching out into creating original TV content as well as being a platform for third-party content would be a reasonably good cultural match for the Game Studios aspect of Xbox, but it’s hard to imagine a company that has worked so hard to divorce itself from the entire physical product market suddenly leaping back into it with a large, expensive piece of hardware.
This, I think, is what ultimately convinces me that Xbox is staying at Microsoft – for better or worse. It might be much better for Xbox if it was a centrepiece project for a company whose business objectives matched its strengths; but I don’t think any such company exists to take the division off Microsoft’s hands. Instead, Spencer and his talented team will have to fight to ensure that Xbox remains relevant and important within Microsoft. Building its recognition as a Windows 10 platform is a good start; figuring out other ways in which Xbox can continue to be a great game platform while also bringing value to the other things that Microsoft does is the next challenge. Having turned around public perception of the console to a remarkable degree, the next big task for the Xbox team will be to change perceptions within Microsoft itself and within the investor community – if Xbox is stuck at Microsoft for the long haul, it needs to carve itself a new niche within a business vision that isn’t really about the living room any more.
The HSA Foundation has issued a new standard which allows graphics chips, processors and other hardware to work together more closely, boosting things like video search.
The downside is that Intel and Nvidia do not appear to have been involved in the creation of version 1.0 of the Heterogeneous System Architecture specification.
What the standard means is that compute, graphics and digital-signal processors will be able to directly address the same physical RAM in a cache-coherent manner. It spells the end of external buses and loosely linked interconnects, and allows data to be processed in place by several devices at the same time, rather than copied back and forth between separate memory pools.
A GPU and CPU can work on the same bits of memory in an application in a multi-threaded way. The spec refers to GPUs and DSPs as “kernel agents” which sounds a bit like corporate spies for KFC.
The blueprints support 64-bit and 32-bit systems, and map out virtual memory, memory coherency, message passing, programming models and hardware requirements.
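As a loose software analogy (plain CPU threads standing in for HSA’s “kernel agents”; this is not actual HSA code), the key idea is that several workers update one buffer in a single shared address space, in place, with no copies between private memory pools:

```python
import threading

# One buffer in one shared address space; under HSA, CPU, GPU and DSP
# "kernel agents" would all see the same physical memory, coherently.
buffer = [0] * 8

def scale(start, end, factor):
    # Each worker updates its slice in place -- no copy to a private pool.
    for i in range(start, end):
        buffer[i] = (buffer[i] + i) * factor

# Two "agents" (ordinary threads here) working on the same memory at once.
t1 = threading.Thread(target=scale, args=(0, 4, 2))
t2 = threading.Thread(target=scale, args=(4, 8, 2))
t1.start(); t2.start()
t1.join(); t2.join()

print(buffer)  # → [0, 2, 4, 6, 8, 10, 12, 14]
```

The point of the analogy is only the memory model: no explicit transfer step exists between the two workers, which is what HSA promises for heterogeneous devices.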
While the standard is backed by AMD, ARM, Imagination Technologies, MediaTek, Qualcomm, and Samsung, Intel and Nvidia are giving it a miss. The thought is that with these names onboard there should be enough of a critical mass of developers who will build HSA-compliant games and tools.
Intel has announced details of its first Xeon system on chip (SoC), which will become the new Xeon D 1500 processor family.
Although it is being touted as a server, storage and compute applications chip at the “network edge”, word on the street is that it could be under the bonnet of robots during the next apocalypse.
The Xeon D SoCs use the more useful bits of the E3 and Atom SoCs along with 14nm Broadwell core architecture. The Xeon D chip is expected to bring 3.4x better performance per watt than previous Xeon chips.
Lisa Spelman, Intel’s general manager for the Data Centre Products Group, revealed the eight-core 2GHz Xeon D 1540 and the four-core 2.2GHz Xeon D 1520, both running at 45W. The chips also feature integrated I/O and networking to slot into microservers and appliances for networking and storage, the firm said.
The chips are also being touted for industrial automation and may see life powering robots on factory floors. Since simple robots can run on basic, low-power processors, there’s no reason why faster chips can’t be plugged into advanced robots for more complex tasks, according to Intel.
While AMD FreeSync-capable monitors are now available in select regions, AMD has given a short update saying that the FreeSync driver will be coming on March 19th.
According to AMD, FreeSync monitors are now available in some countries in the EMEA (Europe, Middle East and Africa) region, and since “gamers are excited to bring home an incredibly smooth and tearing-free PC gaming experience powered by AMD Radeon GPUs and AMD A-Series APUs”, AMD has announced that a FreeSync-capable driver for single-GPU configurations will be available on March 19th. Unfortunately, those running an AMD CrossFire system will have to wait until April.
Plenty of manufacturers, including Acer, LG, BenQ, Iiyama and many more, have already announced their own FreeSync monitors, so there will be plenty of choice when it comes to screen size and resolution.
In case you missed it earlier, FreeSync is AMD’s response to Nvidia’s G-Sync and syncs the refresh rate of the monitor with the rendering rate of the AMD Radeon GPU, thus removing screen tearing and reducing stuttering in games. We had a chance to check it out during CES 2015 and it looked pretty good.
Most manufacturers announced that their FreeSync monitors will be available during this month so finally we will have a chance to see AMD’s FreeSync push in retail/e-tail.
It looks like the Mantle API developed by AMD is slowly reaching the end of its useful life.
Mantle has apparently served its purpose as a bridge between DirectX 11 and DirectX 12 and AMD is starting to tell new developers to focus their attention on DirectX and GLnext.
Raja Koduri, the Vice President of Visual and Perceptual Computing at AMD said in a blog post:
The Mantle SDK also remains available to partners who register in this co-development and evaluation program. However, if you are a developer interested in Mantle “1.0” functionality, we suggest that you focus your attention on DirectX® 12 or GLnext.
This doesn’t mean a quick death for Mantle. AMD suggests it will support its partners and that there are still titles to come with support for Mantle. Battlefield Hardline is one of them and it’s a big one.
Back in November AMD announced a Mantle update, telling the world that there were four engines and 20+ launched or upcoming titles, and that 10 developers had publicly announced their support for Mantle.
There are close to 100 registered developers in the Mantle beta program. The Frostbite 3 engine (Battlefield Hardline), CryEngine (Crysis series), Nitrous Engine (Star Swarm) and Asura Engine (Sniper Elite) currently have support for Mantle. Some top games including Thief and Sid Meier’s Civilization: Beyond Earth also support Mantle.
AMD will tell developers a bit more about Mantle at the Game Developers Conference 2015, which starts today in San Francisco, and will talk more about its definition of an open platform. The company will also tackle new capabilities beyond draw calls, and Mantle will remain available to those who are already part of the program.
However, AMD suggests new partners should look elsewhere and focus on alternatives. When we spoke with Raja and a few other people from AMD over the last few quarters, we learned that Mantle was never supposed to take on DirectX 12. You should look at Mantle as AMD’s wish list: it’s what AMD wanted and needed before Microsoft was ready to introduce DirectX 12. Mantle is a low-level rendering API, and keep in mind that it came almost two years before DirectX 12.
The Battlefield 4 Mantle patch came in February 2014, roughly a year ago, and it showed a significant performance increase on supported hardware. Battlefield Hardline is the next big game to support Mantle and it comes in two weeks. CryEngine also supports Mantle, but we will have to wait and see whether that support ever translates into a shipped game.
Free to play has an image problem. It’s the most influential and arguably important development in the business of games in decades, a stratospherically successful innovation which has enabled the opening up of games to a wider audience than ever before. Implemented well, with clear understanding of its principles and proper respect afforded to players and creativity alike, it’s more fair and even, in a sense, democratic than old-fashioned models of up-front payment; in theory, players pay in proportion to their enjoyment, handing over money in small transactions for a continued or deepened relationship with a game they already love, rather than giving a large amount of cash up-front for a game they’ve only ever seen in (possibly doctored) screenshots and videos.
While that is a fair description, I think, of the potential of free-to-play, it’s quite clearly not the image that the business model bears right now. You probably scoffed about half a dozen times reading the above paragraph – it may be a fair description of free-to-play at its hypothetical best, but it’s almost certainly at odds with your perceptions.
How, then, might we describe the perception of F2P? Greedy, exploitative, unfair, cheating… Once these adjectives start rolling, it’s hard to get them to stop. The negative view of F2P is that it’s a series of cheap psychological tricks designed to get people to spend money compulsively without ever realising quite how much cash they’re wasting on what is ultimately a very shallow and cynical game experience.
It’s neither surprising nor unexpected that this perception should be held by “core” gamers or those enamoured of existing styles of game. Although F2P has proven very successful for games like MMOs and MOBAs, it’s by no means universally applicable, either across game types or across audience types; some blundering attempts by publishers to add micro-transactions to premium console and PC titles, combined with deep misgivings over the complete domination of F2P in the mobile game market, have left plenty of more traditional gamers with a very negative and extremely defensive attitude regarding the new business model. That’s fine, though; F2P isn’t for that audience (though it’s a little more complex than that in reality; many players will happily tap away at an F2P mobile game while waiting for matchmaking in a premium console game).
What’s increasingly clear, however, is that there’s an image problem for F2P right in the midst of the audience at whom it’s actually aimed. The negative perception of F2P is becoming increasingly mainstream. It gets mass-media coverage on occasion; recently, it spurred Apple to create a promotion specifically pointing App Store customers to games with no in-app purchases. I happen to think that’s a great idea personally, but what does it say about the feedback from Apple’s customers regarding F2P games, that promotion of non-F2P titles was even a consideration?
Even some of the most successful F2P developers now seem to want to distance themselves from the business model; this week’s interview with Crossy Road developers Hipster Whale saw the team performing linguistic somersaults to avoid labelling their free-to-play game as being free-to-play. Crossy Road is a brilliant, fun, interesting F2P game that hits pretty much all of the positive notes I laid out up in the first paragraph; that even its own developers seem to view “free-to-play” as an overtly negative phrase is deeply concerning.
The problem is that the negativity has a fair basis; there’s a lot of absolute guff out there, with the App Store utterly teeming with F2P games that genuinely are exploitative and unfair; worst of all, the bad games tend to be stupid, mean-spirited and grasping, attempting to suck money out of easily tricked customers (and let’s be blunt here: we’re talking, in no small measure, about kids) rather than undertaking the harder but vastly more rewarding task of actually entertaining and enthralling people until they feel perfectly happy with parting with a little cash to see more, do more or just to deepen their connection to the game.
Such awfulness, though, is not universal by any measure. There are tons of good F2P games out there; games that are creative and interesting (albeit often within a template of sorts; F2P was quick to split off into slowly evolving genre-types, though nobody who’s played PC or console games for very long can reasonably criticise that particular development), games that give you weeks or months of enjoyment without ever forcing a penny from your pocket unless you’re actually deeply engaged enough to want to pay up to get something more. Most of F2P’s bona fide hits fit into this category, in fact; games like Supercell’s Clash of Clans or Hay Day, GungHo’s Puzzle & Dragons and, yes, even King’s Candy Crush Saga, which is held aloft unfairly as an example of F2P scurrilousness, yet has never extracted a penny from 70 percent of the people who have finished (finished!) the game. That’s an absolutely enormous amount of shiny candy-matching enjoyment (while I don’t like the game personally, I don’t question that it’s enjoyment for those who play it so devotedly) for free.
Unfortunately, the negative image that has been built up by free-to-play threatens not just the nasty, exploitative games, but all the perfectly decent ones as well – from billion-grossing phenomena like Puzzle & Dragons to indie wunderkind like Crossy Road. If free-to-play as a “brand” becomes irreparably damaged, the consequences may be far-reaching.
A year ago, I’d have envisaged that the most dangerous consequence on the horizon was heavy-handed legislation – with the EU, or perhaps the USA, clamping down on F2P mechanisms in a half-understood way that ended up damaging perfectly honest developers along with two-bit scam merchants. I still think that’s possible; companies have ducked and dived around small bits of legislation (or the threat of small bits of legislation) in territories including Japan and the EU, but the hammer could still fall in this regard. However, I no longer consider that the largest threat. No, the largest threat is Apple; the company which did more than any other to establish F2P as a viable market remains the company that could pull the carpet out from underneath it entirely, and while I doubt that’s on the cards right now, the wind is certainly turning in that direction.
Apple’s decision to promote non-F2P titles on its store may simply be an editor’s preference; but given the growing negativity around F2P, it may also be a sign that customer anger over F2P titles on iOS is reaching receptive ears at Apple. Apple originally permitted free apps (with IAP or otherwise) for the simple reason that having a huge library of free software available to customers was a brilliant selling point for the iPhone and iPad. At present, that remains the case; but if the negativity around the perception of F2P games were ever to start to outweigh the positive benefits of all that free software, do not doubt that Apple would reverse course fast enough to make your head spin. Reckon that its 30 percent share of all those Puzzle & Dragons and Candy Crush Saga revenues would be enough to make it think twice? Reckon again; App Store revenue is a drop in the ocean for Apple, and if abusive F2P ever starts to significantly damage the public perception of Apple’s devices, it will ban the model (in part, at least) without a second thought to revenue.
Some of you, those who fully buy into the negative image of F2P, might think that would be a thing to celebrate; ding, dong, the witch is dead! That’s a remarkably short-sighted view, however. In truth, F2P has been the saviour of a huge number of game development jobs and studios that would otherwise have been lost entirely in the implosion of smaller publishers and developers over the past five years; it’s provided a path into the industry for a great many talented creative people, grown the audience for games unimaginably and has provided a boost not only to mobile and casual titles, but to core games as well – especially in territories like East Asia. Wishing harm on F2P is wishing harm on many thousands of industry jobs; so don’t wish F2P harm. Wish that it would be better; that way, everyone wins.
Spotted by the GforGames site in GeekBench results, running inside an unknown smartphone, MediaTek’s MT6795 managed to score 886 points in the single-core test and 4536 points in the multi-core test. These results were enough to put it neck and neck with the mighty Qualcomm Snapdragon 810 SoC tested in the LG G Flex 2, which scored 1144 points in the single-core and 4345 in the multi-core test. While it did outrun the MT6795 in the single-core test, the multi-core test was clearly not kind on the Snapdragon 810.
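A quick back-of-the-envelope calculation from the quoted scores shows why the multi-core result flatters the MT6795: its multi-to-single-core scaling is much better than the Snapdragon 810’s (the scores are the reported GeekBench numbers; the ratio is just illustration):

```python
# Quoted GeekBench scores: the leaked MT6795 result vs the LG G Flex 2.
mt6795 = {"single": 886, "multi": 4536}
sd810 = {"single": 1144, "multi": 4345}

for name, scores in (("MT6795", mt6795), ("Snapdragon 810", sd810)):
    scaling = scores["multi"] / scores["single"]
    print(f"{name}: multi/single scaling = {scaling:.2f}x")
# → MT6795: multi/single scaling = 5.12x
# → Snapdragon 810: multi/single scaling = 3.80x
```

In other words, the MT6795 gains roughly five times its single-core score across eight cores, while the Snapdragon 810 gains under four times, which is exactly the pattern the benchmark shows.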
The unknown device was running on Android Lollipop OS and packed 3GB of RAM, which might have given the MT6795 an edge over the LG G Flex 2.
MediaTek’s octa-core MT6795 was announced last year and while we are yet to see some of the first design wins, recent rumors suggested that it could be powering Meizu’s MX5, HTC’s Desire A55 and some other high-end smartphones. The MediaTek MT6795 is a 64-bit octa-core SoC clocked at up to 2.2GHz, with four Cortex-A57 cores and four Cortex-A53 cores. It packs PowerVR G6200 graphics, supports LPDDR3 memory and can handle 2K displays at up to 120Hz.
As we are just a few days from Mobile World Congress (MWC) 2015, which kicks off in Barcelona on March 2nd, we are quite sure that we will see more info as well as more benchmarks. While a single benchmark running on an unknown smartphone might not be the best representation of performance, it does show that MediaTek certainly has a good chip and can compete with Qualcomm and Samsung.
According to Tom’s Hardware, one of the unexpected features of DirectX 12 is the ability to use Nvidia GPUs alongside AMD GPUs in multi-card configurations.
Because DirectX 12 operates at a lower level than previous versions of the API, it is able to treat all available video resources as one unit. Card model and brand make no difference to a machine running DX12.
This could mean that the days of PC gamers having to decide between AMD and Nvidia are over: they can pick their preferred hardware from both companies and enjoy the best of both worlds. They will also be able to mix old and new cards.
However, there might be a few problems with all this. Rather than leaving optimization to drivers and hardware, software developers will have to be on the ball to make sure their products work.
More hardware options means more potential configurations that games need to run on, and that could cause headaches for smaller studios.
Sony is expected to use more MediaTek application processors in upcoming Xperia smartphones.
According to Digitimes, the Japanese consumer electronics giant is planning to increase its reliance on MediaTek chips in entry-level and mid-range smartphones this year. There is still no word on high-end products, and it seems Qualcomm’s 800-series parts will continue to power Xperia flagships for the time being.
Sony is also working with a number of Taiwanese ODMs like Foxconn, FIH Mobile, Compal and Arima Communications. The company’s latest Xperia E4 smartphone was in fact outsourced to Arima.
As for Foxconn/FIH Mobile and Compal, they are said to be developing 4G models for Sony, which means they are supposed to cover the mid-range segment. Most of these new models are expected to be based on MediaTek’s new octa-core MT6752 processor, which packs 64-bit Cortex-A53 cores.
The affordable MT6752 has already found its way into a number of Chinese mid-range smartphones, as well as big-brand devices like the HTC Desire 826 and Acer Liquid Jade S.
A year or two ago, it seemed that doom and gloom reigned over the prospects for “core” gaming. With smartphones and tablets becoming this decade’s ubiquitous gaming devices, casual and social games ascendant and free-to-play established as just about the only effective way to make money from the teeming masses swarming to gaming for the first time, dire predictions abounded about the death of game consoles, the decline of paid-for games and the dwindling importance of “core” gamers to the games industry at large.
This week’s headlines speak of a different narrative – one that’s become increasingly strong as we’ve delved into what 2015 has to offer. Sony’s financial figures look pretty good, buoyed partially by the weakness of the Yen but notably also by the incredible success of the PlayStation 4 – a console which more aggressive commentators were reading funeral rites for before it was even announced. Both of the PS4’s competitors, incidentally, ended 2014 (and began 2015) in a stronger sales position than they were in 12 months previously, with next-gen home consoles overall heading for the 40 million sales mark in pretty much record time.
Then there’s the software story of the week: the startling sales of Grand Theft Auto V, which, thanks to ten million sales of the PS4 and Xbox One versions of the game, have now topped 45 million units. That’s an incredible figure, one which suggests that this single game has generated well over $2 billion in revenue thus far; the GTA franchise as a whole must, at this point, be one of the most valuable entertainment franchises in existence, comparable in revenue terms to the likes of Star Wars or the Marvel Cinematic Universe.
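The $2 billion claim is easy to sanity-check with back-of-the-envelope arithmetic (the ~$50 average selling price below is my own illustrative assumption, not a reported figure; actual prices varied across platforms and over time):

```python
units = 45_000_000   # reported lifetime sales of GTA V
avg_price = 50       # assumed average selling price in USD (illustrative)

revenue = units * avg_price
print(f"${revenue / 1e9:.2f} billion")  # → $2.25 billion
```

Even with a markedly lower assumed average price, the total comfortably clears the $2 billion mark, which is what makes the franchise comparable to Hollywood’s biggest properties.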
Look, this is basically feel-good stuff for the games business; “hey guys, we’re doing great, our biggest franchise is right up there with Hollywood’s finest and these console sales are a promise of a solid future”. Stories like this used to turn up all the time back when games were genuinely struggling to be recognised as a valid and important industry alongside TV, music and film. Nowadays, that struggle has been internalised; it’s worth stepping back every now and then from the sheer enormity of figures like Apple and Samsung’s smartphone sales, or Puzzle & Dragons’ revenue (comparable to GTAV’s, but whether that means the game can birth a successful franchise or sustain itself long-term is another question entirely), or the number of players engaged with top F2P games, to remind ourselves that there’s still huge success happening in the “traditional” end of the market.
The take-away, perhaps, is that this isn’t a zero-sum game. The great success of casual and social games, first on Facebook and now on smartphones, isn’t that they’ve replaced core games, cannibalising the existing high-value market; it’s that they’ve acquired a whole new audience for themselves. Sure, there’s overlap, but there’s little evidence to suggest that this overlap results in people engaging less with core games; I, for one, have discovered that many smartphone F2P games have a core loop that fits nicely into the match-making and loading delays for Destiny’s Crucible.
That’s not to say that changes to the wider business haven’t resonated back through the “core” games space. The massive success of a game like GTAV has a dark side; it reflects the increasing polarisation of the high-end games market, in which successful games win bigger than ever, but games which fail to become enormous hits find themselves failing utterly. There’s no mid-market any more; you’re either a complete hit or a total miss. Developers have lamented the loss of the “AA” market (as distinct from the “AAA” space) for some time; that loss is becoming increasingly keenly felt as enormous budgets, production values and financial pressures come to bear on a smaller and smaller line-up of top-tier titles. Several factors drove the death of AA, with production costs and team sizes being major issues, but the rise of casual games and even of increasingly high-quality indie titles undoubtedly played a role – creating whole new market sectors that cost far less to consumers than AA titles had done.
It’s not just success that’s been polarised by this process; it’s also risk. At the high end of the market, risk is simply unacceptable, such are the enormous financial figures at play. Thus it’s largely left to the low end – the indie scene, the flood of titles appearing on the App Store, on Steam and even on the likes of PlayStation Vita – to take interesting risks and challenge gaming conventions. Along the way, some of the talented creators involved in these scenes are either trying to engage new audiences, or to engage existing audiences in new ways; sometimes experimenting with gameplay and interactivity, sometimes with narrative and art style, sometimes with business model or distribution.
All of which leads me to explain why I keep writing “core” games, with inverted commas around “core”; because honestly, I’m increasingly uncertain what this term means. It used to refer to specific genres, largely speaking those considered to have special resonance for geeky guys; gory science fiction FPS games, high fantasy RPGs, complex beat-’em-ups and shoot-’em-ups, graphic survival horror titles, war-torn action games. Then, for a while, the rise of F2P seemed to make the definition of “core” shimmer and reform itself; now it meant “games people pay for up front, and the kind of people who pay for those games”.
Now? Now, who knows what “core” really means? League of Legends is certainly something you have to be pretty damn deeply involved with to enjoy, but it’s free-to-play; so is Hearthstone, which is arguably not quite so “core” but still demands a lot of attention and focus. There are great games on consoles – systems whose owners paid hundreds of dollars for a devoted gaming machine – which are free-to-play. There are games on mobile phones that cost money up front and are intricate and engrossing. There are games you can download for free on your PC, or pick up for a few dollars on Steam, that explore all sorts of interesting and complex niches of narrative, of human experience and of the far-flung corners of what it means to play a “game”. Someone who sits down for hours unravelling the strands of a text adventure written in Twine; are they “core”? Someone who treats retro gaming like a history project, travelling back through the medium’s tropes and concepts to find their origin points; are they “core”? How about Frank Underwood in House of Cards, largely uninterested in games but picking up a violent shooter to work out frustrations on his Xbox in the evenings; is he a “core gamer”?
Don’t get me wrong; this fuzzing of the lines around the concept of “core” is, to my mind, a vital step in the evolution of our medium. That the so-called “battle” between traditional business models and F2P, between AAA studios and indies, between casual and core, was not a zero-sum game and could result in the expansion of the entire industry, not the destruction of one side or another, has been obvious from the outset. What was less obvious and took a little more time to come to pass was that not only would each of those sides not detract from the others; they would actually learn from one another and help to fuel one another’s development. New creative outlooks, new approaches to interactivity, new thoughts on social and community aspects of gaming, new ideas about business models and monetisation; these all mingle with one another and help to make up for the creative drought at the top of the AAA industry (and increasingly, at the top of the F2P industry, too) by providing a steady feed of new concepts and ideas from below.
It’s fantastic and very positive that the next-gen consoles are doing well and that GTAV has sold so many copies (dark thoughts regarding the polarisation of AAA success aside); but it’s wrong, I think, to just look at this as being “hey, core gaming is doing fine”. Games aren’t made up of opposed factions, casual at war with core; it’s a spectrum, attracting relevant audiences from across the board. Rather than pitting GTAV against Puzzle & Dragons, I’d rather look at the enormous success of both games as being a sign of how well games are doing overall; rather than stacking sales of next-gen consoles against sales of smartphones and reheating old arguments about dedicated game devices vs multi-purpose devices, I’d rather think about the enormous addressable audience that represents overall. As the arguments about casual or F2P gaming “destroying” core games start to fade out, let’s take this opportunity to rid ourselves of some of our more meaningless distinctions and categories for good.
AMD’s first 14nm processors are codenamed Summit Ridge, and they are reportedly based on an all-new architecture, dubbed Zen.
Information on the new architecture and the Summit Ridge design is still very sketchy. According to Sweclockers, the chips will feature up to eight CPU cores, support for DDR4 memory and TDPs of up to 95W.
Summit Ridge will use a new socket, designated FM3. That would suggest we are looking at A-series APUs, but there is no word on graphics, and the eight-core design points to proper FX-series CPUs – we simply do not know at this point. It is also possible that Summit Ridge is a replacement for the Vishera FX line, but on an FM socket rather than an AM socket.
Of course, AMD Zen should end up in more than one product, namely in APUs and Opteron server parts. The new architecture has been described as a “high-performance” design and will be manufactured on the Samsung-GlobalFoundries 14nm node.
As for the launch date, don’t hold your breath – the new parts are expected to show up in the third quarter of 2016, roughly 18 months from now.
New evidence from two LinkedIn profiles of AMD employees suggests that AMD’s upcoming Radeon R9 380X graphics card, which is expected to be based on the Fiji GPU, will actually use High-Bandwidth Memory.
Spotted by a member of the 3D Center forums, the two LinkedIn profiles both mention the R9 380X by name and describe it as the world’s first 300W 2.5D discrete GPU SoC using stacked-die High-Bandwidth Memory and a silicon interposer. While LinkedIn is an unusual source for a leak, these profiles are more reliable than mere rumors.
First in line is the profile of Ilana Shternshain, an ASIC physical design engineer who has worked on the PlayStation 4 SoC, the Radeon R9 290X and the R9 380X, the last of which is described as the “largest in ‘King of the hill’ line of products.”
The second LinkedIn profile belongs to AMD’s system architect manager, Linglan Zhang, who was involved in developing “the world’s first 300W 2.5D discrete GPU SOC using stacked die High Bandwidth Memory and silicon interposer.”
Earlier rumors suggested that AMD might launch the new graphics cards early this year, as the company is under heavy pressure from Nvidia’s recently released, and upcoming, Maxwell-based graphics cards.
We want to make sure you realize that 20nm GPUs won’t be coming at all. Nvidia, Qualcomm, Samsung and Apple may all be producing 20nm SoCs, but there won’t be any 20nm GPUs.
From what we know, AMD and Nvidia won’t ever release 20nm GPUs, as the yields are so bad that manufacturing them would not make sense. It is simply not economically viable to replace 28nm production with 20nm.
This means the real next big thing will come with the 16nm and 14nm FinFET nodes from TSMC and GlobalFoundries/Samsung respectively. In the meantime, AMD is working on Caribbean Islands and Fiji, while Nvidia has been working on a new chip of its own.
That doesn’t mean you cannot pull off a small miracle on 28nm. Nvidia did exactly that back in September 2014 with Maxwell, proving that optimization can make a big difference on the same manufacturing process when a new node is not an option.
Despite the lack of 20nm chips, we still expect the next-gen Nvidia and AMD chips to bring some innovations and make you want to upgrade – to play the latest games on FreeSync or G-Sync monitors, or at 4K/UHD resolutions.
While we can’t get a real handle on when Microsoft might reveal the VR headset it has had in development, we have learned from our sources that the project is well underway and that selected developers already have prototypes.
It is hard to say when Microsoft might actually reveal the new VR headset and technology, but GDC or E3 would seem the likely events for an introduction. We do know that Microsoft is targeting 2015 to move the VR headset into mass production, and it is thought that we will see versions for both the Xbox One and PC, though we expect the PC version to come a little after the Xbox One version.
Rumor has it that the same team that worked on the Surface tablet has taken on this project as well.
Detractors of free-to-play have been having a good few weeks, on the surface at least. There’s been a steady drip-feed of articles and statements implying that premium-priced games are gaining ground on mobile and tablet devices, with parents in particular increasingly wary of F2P game mechanics; a suggestion from SuperData CEO Joost van Dreunen that the F2P audience has reached its limits; and, to top it off, a move by Apple to replace the word “Free” with a button labelled “Get” in the App Store, a response to EU criticism of the word Free being applied to games with in-app purchases.
Taken individually, each of these things may well be true. Premium-priced games may indeed be doing better on mobile devices than before; parents may indeed be demonstrating a more advanced understanding of the costs of “free” games, and reacting negatively to them. Van Dreunen’s assertion that the audience for F2P has plateaued may well be correct, in some sense; and of course, the EU’s action and Apple’s reaction are unquestionable. Yet to collect these together, as some have attempted, and present them as evidence of a turning tide in the “battle” between premium and free games, is little more than twisting the facts to suit a narrative in which you desperately want to believe.
Here’s another much-reported incident which upsets the apple cart; the launch of an add-on level pack for ustwo’s beautiful, critically acclaimed and much-loved mobile game Monument Valley. The game is a premium title, and its level pack, which added almost as much content as the original game again, cost $2. This charge unleashed a tide of furious one-star reviews slamming the developers for their greed and hubris in daring to charge $2 for a pack of painstakingly crafted levels.
This is a timely and sobering reminder of just how deeply ingrained the “content is free” ethos has become on mobile and tablet platforms. To remind you; Monument Valley was a premium game. The furious consumers who viewed charging for additional content as a heinous act of money-grubbing were people who had already paid money for the game, and thus belong to the minority of mobile app customers willing to pay for stuff up front; yet even within this group the scope of their willingness to countenance paying for content is extremely limited (and their ire at being forced to do so is extraordinary).
Is this right? Are these consumers desperately wrong? It doesn’t matter, to be honest; it’s reality, and every amateur philosopher who fancies himself the Internet’s Immanuel Kant can talk about his theories of “right” pricing and value in comment threads all day long without making a whit of difference to that reality. Mobile consumers (and increasingly, consumers on other platforms) are used to the idea that they get content for free, through fair means or foul. We could argue the toss about whether this is an economic inevitability in an era of almost-zero reproduction and distribution costs, as some commentators believe, but the ultimate outcome is no longer in question. Consumers, the majority of them at least, expect content to be free.
F2P, for all that its practitioners have misjudged and overstepped on many occasions, is a fumbling attempt to answer an absolutely essential question that arises from that reality; if consumers expect content to be free, what will they pay for? The answer, it transpires, is quite a lot of things. Among the customers who wouldn’t pay $2 for a level pack are probably a small but significant number who wouldn’t have blinked an eye at dropping $100 on in-game currency to speed up their ability to access and complete much the same levels, and a much more significant percentage who would certainly have spent roughly that $2 or more on various in-game purchases which didn’t unlock content, per se, but rather smoothed a progression curve that allowed access to that content. Still others might have paid for customisation or for merchandise, digital or physical, confirming their status as a fan of the game.
I’m not necessarily saying that ustwo should have done any of those things; their approach to their game is undoubtedly grounded in an understanding of their market and their customers, and I hope that the expansion was ultimately successful despite all the griping. What I am saying is that this episode shows that the problem F2P seeks to solve is real, and the notion that F2P itself is creating the problem is naive; if games can be distributed for free, of course someone will work out a way to leverage that in order to build audience, and of course consumers will become accustomed to the idea that paying up front is a mug’s game.
If some audiences are tiring of F2P’s present approach, that doesn’t actually remove the problem; it simply means that we need new solutions, better ways to make money from free games. Talking to developers of applications and games aimed at kids reveals that while there’s a sense that parents are indeed becoming very wary of F2P – both negative media coverage and strong anti-F2P word of mouth among parents seem to be major contributing factors – they have not, as some commentators suggest, responded by wanting to buy premium software. Instead, they want free games without any in-app purchases; they don’t buy premium games, and they either avoid in-app purchases or complain bitterly about them. Is this reasonable? Again, it barely matters; in a business sense, what matters is figuring out how to make money from this audience, not questioning their philosophy of value.
Free has changed everything, yet that’s not to argue with the continued importance of premium software either. I agree with SuperData’s van Dreunen that there’s a growing cleavage between premium and free markets, although I suspect that the audience itself overlaps significantly. I don’t think, however, that purchasers of premium games are buying quite the same thing they once were. Free has changed this as well; the emergence and rapid rise of “free” as the default price point has meant that choosing to pay for software is an action that exists in the context of abundant free alternatives.
On a practical level, those who buy games are paying for content; in reality, though, that’s not why they choose to pay. There are lots of psychological reasons why people buy media (often it’s to do with self-image and self-presentation to peers), and now there’s a new one; by buying a game, I’m consciously choosing to pay for the privilege of not being subjected to free software monetisation techniques. If I pay $5 for a game, a big part of the motivation for that transaction is the knowledge that I’ll get to enjoy it without F2P mechanisms popping up. Thus, even the absence of F2P has changed the market.
This is the paradigm that developers at all levels of the industry need to come to terms with. Charging people for content is an easy model to understand, but it’s a mistaken one; people don’t really buy access to content. People buy all sorts of other things that are wrapped up, psychologically, in a content purchase, but are remarkably resistant to simply buying content itself.
There’s so much of it out there for free – sure, only some through legitimate means, but again, this barely matters. The act of purchase is a complex net of emotions, from convenience (I could pirate this but buying it is easier) and perceived risk (what if I get caught pirating? What if it’s got a virus?), through to self-identity (I buy this because this is the kind of game people like me play) and broadcast identity (I buy this because I want people to know I play this kind of game), through to peer group membership (I buy this because it’s in my friends’ Steam libraries and I want to fit in) or community loyalty (I buy this because I’m involved with a community around the developer and wish to support it); and yes, avoidance of free-game monetisation strategies is a new arrow in that quiver. Again, actually accessing content is low on the list, if it’s even there at all, because even if that specific content isn’t available for free somewhere (which it probably is), there’s so much other free content out there that anyone could be entertained endlessly without spending a cent.
In this context, I think there’s a bright future for charging premium prices for games – even on platforms where Free otherwise dominates, although it will always be niche there – but to harness this, developers should try to understand what actually motivates people to buy and recognise the disconnect between what the developer sees as value (“this took me ages to make, that’s why it’s got a price tag on it”) and what the consumer actually values – which could be anything from the above list, or a host of other things, but almost certainly won’t be the developer’s sweat and tears.
That might be tough to accept; but like the inexorable rise of free games and the continuing development of better ways to monetise them, it’s a commercial reality that defies amateur philosophising. You may not like the audience’s attitude to the value of content and unwillingness to pay for things you consider to be valuable – but between a developer that accepts reality and finds a way to make money from the audience they actually have, and the developer who instead ploughs ahead complaining bitterly about the lack of the ideal, grateful audience they dream of, I know which is going to be able to pay the bills at the end of the month.