After less than two years at the helm of Zynga, Don Mattrick is on the move again. He’s picking up the best part of $20 million on his way out the revolving door, so don’t feel too bad for him, but after his catastrophic mismanagement of the Xbox One’s development and launch, his failure to lift Zynga out of its post-IPO slump looks like yet another blot on the extremely expensive copybook of the former Microsoft executive.
There will be plenty of I-told-you-so’s over this news, but in truth, it wasn’t so predictable. Mattrick always looked like a better fit for Zynga than he had been for Microsoft; the balls-up he made of the Xbox One could be attributed, if we’re feeling charitable, to having sensibilities far more in tune with a broad mass market than with the core audience a launching console needs to please. As such, the social- and (latterly) mobile-focused Zynga should have been a more suitable challenge for him; and indeed, while the company’s performance under his tenure hasn’t exactly been good, or even mediocre, there have been some important bright spots, most notably the (clever) acquisition of mobile specialists NaturalMotion, and the (achingly slow, but getting there) transition away from browser-based games to mobile platforms.
That the company’s performance in terms of finances and share price alike failed to pick up under Mattrick’s tenure, though, is something easily presented as an outright failure; and after the mess he made at Microsoft, it would be straightforward to roll our eyes at the spectacle of yet another overpaid exec with bugger all knowledge about games being given an enormous sack full of $100 bills with which to break his falls after a gentle defenestration from his latest failure. That’s not entirely an unfair characterisation, but not entirely fair either, I suspect, because no sooner was Mattrick out of the CEO’s chair than Zynga founder and former CEO Mark Pincus had his backside back in the seat – and that, to me, sets off all sorts of alarm bells.
For a CEO to depart and to be instantly replaced is not entirely unusual, but it does raise some eyebrows; for a CEO to depart after a short and unfruitful period, only to be replaced instantly by the company founder whom they replaced in the role, strongly suggests that the company founder never actually took their fingers out of the pie. The reasons for Pincus leaving the CEO’s role were pretty clear; he was broadly seen by investors as a millstone around the company’s neck, his dictatorial nature, inflexibility and tendency to make stupid, inflammatory statements in public being pretty damaging to a firm struggling to recover from an overheated IPO. That he’s been waiting in the wings for Mattrick to depart raises troubling questions over just who has actually been running Zynga for the past two years; it’s not hard to imagine Mattrick finding his hands tied by the presence of a highly opinionated and influential founder who never actually wanted to let go of the reins in the first place, something which might explain a good deal about the tardy pace of Zynga’s turnaround.
The markets, unsurprisingly, reacted to the news by dumping Zynga stock; the founder who was doing a miserable job of being CEO has stepped back up to replace the new guy who was also doing a poor (but better) job of being CEO? It’s a net negative, not merely because for all his faults Mattrick was broadly considered a better CEO than Pincus, but because it suggests that the upper echelons of Zynga’s management are in absolute disarray.
Still, though; even this latest dump of Zynga’s stock is only going to bring the company back to depths it already plumbed back in February… and in December… oh, and last October, too. Zynga is bumping along the bottom, and has been since mid-2012, in share price terms. It looked like it might climb off the floor around the start of 2014, but since the middle of last year it’s traded at around $3 and under; frankly, the depths to which it can fall off the back of this executive-revolving-door farce are severely limited by the fact that it’s already at rock bottom. That’s because Zynga’s real problems, although they may well start from its dysfunctional management, are much more deeply rooted. The company hasn’t had a hit in years – even more problematically, it has never had a bona fide, honest to god hit on a mobile platform. It bought some smaller developers with mobile hits, and then failed to grow or develop them (in some embarrassing cases, they flopped almost immediately after being purchased). FarmVille, a game franchise whose existence you had entirely forgotten until I just mentioned its name at the start of this sentence, remains the jewel in Zynga’s crown. The “Games” section on Zynga’s website reads embarrassingly like a blow-by-blow account of games everyone seemed to be into for a few months, years ago.
There might – might – be light at the end of the tunnel. It would be easy to dismiss Zynga’s new Great Hope, the action strategy title Dawn of Titans, as absolute folly; the “Clash of Clans” market, so utterly saturated that top games in the category have ended up spending millions on Super Bowl commercials to try and soak up the last remaining dregs of the market, is a horrible place to be launching a new product. Dawn of Titans, though, is just branded and presented a little like Clash of Clans; the game itself looks quite different, and most of all, it’s from the genuinely brilliant NaturalMotion. If I were to pick the most likely source of a Zynga renewal, it would be NaturalMotion; one can only hope that, in a similar manner to the Activision / Blizzard relationship, Zynga’s management has the good sense to let NaturalMotion do their jobs and keep their paws off to the greatest extent possible.
Still; the fate of a company is a big thing to rest on one development team, no matter how talented. What Zynga needs is a hit, undoubtedly. What it really, really needs is hits – plural. Once upon a time, there was a formula for social gaming success, based on just the right balance of compelling game design (yes, FarmVille really was compelling in its own way), pulling the right social levers, monetising intelligently and with a light touch, and spreading through some fairly nakedly unpleasant viral approaches on Facebook. Mark Pincus got that formula down perfectly; that is, thus far, the only thing that Zynga has ever executed perfectly. That formula, of course, is part of the history books now; it doesn’t work any more and never will again.
The new formula that Zynga needs to discover is actually a much trickier one, one which game companies have struggled with for decades; the formula for making great games people actually want to play and actually want to recommend to their friends. The CEO who could potentially turn Zynga into a company where that happens would have to create an environment of intense creativity and freedom, utilising the short development cycles, rapid prototyping and start-up style Minimum Viable Product soft-launching strategies enabled by mobile platforms to let creators exercise their imaginations and try many different ideas in search of the hits; a CEO who truly valued creativity and understood how to let it thrive. Mark Pincus wasn’t that CEO first time around. He’s going to have to work hard to prove that Pincus 2.0 is any better.
A rumor fresh out of Korea suggests Nvidia might be tapping Samsung as a GPU foundry, but there is a catch.
The news comes from the Korea Times, which quoted a source familiar with the matter. The source told the paper that the deal involved Nvidia GPUs, but that it was a small contract.
GPUs on 14nm? Something doesn’t add up
If you are sceptical, don’t worry – so are we. While Nvidia is expected to use Samsung for its upcoming Tegra SoCs, this is the first time we’ve heard it might also use Samsung’s and Globalfoundries’ FinFET nodes for GPUs.
This would obviously place Nvidia in an awkward situation, as it would basically be using an AMD spinoff to build its chips.
There is another problem with the report. The source claims the deal is valued at “a few million dollars”, which would be barely enough to cover the cost of a single tape-out. In fact, it might not be enough at all. The cost of taping out FinFET chips is relatively high, as these are cutting edge nodes.
Tegras or GPUs?
We doubt Nvidia will ditch TSMC for Samsung, at least as far as GPUs are concerned.
The most logical explanation would be that Nvidia has inked a deal with Samsung to tape out Tegra chips rather than GPUs. The source may simply have mixed them up; that would explain everything.
Still, there is always a chance Nvidia is looking at alternative nodes for its GPUs, but we just don’t see it happening, at least not yet.
The deal that helped Crytek recover from its recent financial difficulties was struck with Amazon, according to a report from Kotaku.
The online retail giant signed a licensing deal for CryEngine, Crytek’s proprietary game engine. Sources within the company put the deal’s value at between $50 million and $70 million, and suggested that Amazon may be using it as the bedrock for a proprietary engine of its own.
However Amazon uses the technology, though, the importance of the deal for Crytek cannot be overstated. Last year, during the summer, it became apparent that all was not well at the German developer. Employees hadn’t been fully paid in months, leading to an alleged staff walkout in its UK office, where a sequel to Homefront was in development. Koch Media acquired the Homefront IP and its team shortly after.
When the company’s management eventually addressed the rumors, it had already secured the financing necessary to take the company forward. No details of the deal were offered, but it’s very likely that Crytek got the money it needed from Amazon.
We have contacted Crytek to confirm the details, but it certainly fits with the perception that Amazon could emerge as a major creator of game content. It has snapped up some elite talent to do just that, it acquired Twitch for a huge sum of money, and it has been very open about where it plans to fit into the overall market.
Semiconductor sales reached $340 billion in 2014, up eight percent on the year before and led by Intel, according to a report by analyst house Gartner.
The figures represent positive growth for semiconductors powering all device categories, unlike in 2013 when Gartner said that application-specific integrated circuits, discrete components and micro-components all declined.
Intel was the top ranking chip manufacturer in 2014, seeing a return to growth after two years of revenue decline and retaining the number one market share position for the 23rd consecutive year with 15 percent.
Gartner claimed that this was down to a recovery in PC production, which saw sales up just under eight percent to $52bn.
Samsung was the second best in terms of semiconductor revenue last year, according to Gartner’s report, with $34bn in revenue and a market share of 10 percent. However, its 2013-2014 growth, at 13 percent, was almost double Intel’s.
Qualcomm came in third with revenues last year of $19bn, growing 12 percent compared with 2013, but with a much lower market share of almost six percent.
The top 25 semiconductor vendors’ combined revenue increased by almost 12 percent, which was more than the overall industry’s growth, and accounted for 72 percent of total market revenue, up from 70 percent in 2013.
Across the industry, the memory market was the best performer for the second year in a row, Gartner said, growing 17 percent.
This meant that the rest of the market achieved only five percent growth, according to Gartner research vice president Andrew Norwood.
“As a group, DRAM vendors performed best, lifted by the booming DRAM market which saw revenue increase 32 percent to $46bn, surpassing the all-time high of $41.8bn in 1995,” he said.
Last year also saw more merger and acquisition activity among the major semiconductor vendors than the previous year, Gartner said, and some announced deals are still to close in 2015.
Among the most significant was Avago Technologies’ acquisition of LSI, propelling the company into the top 25 semiconductor vendors for the first time.
MStar Semiconductor completed its prolonged merger with MediaTek, and ON Semiconductor acquired Aptina Imaging.
“After adjusting for closed M&A activity, the top 25 semiconductor vendors grew at nine percent,” Gartner said.
AMD must face claims that it committed securities fraud by hiding problems with the bungled 2011 launch of Llano that eventually led to a $100 million write-down, a US court has decided.
According to Techeye, US District Judge Yvonne Gonzalez Rogers said the plaintiffs had a plausible case that AMD officials misled them with statements made in the spring of 2011, and that the company will have to face a full trial.
The lawsuit was over the Llano chip, which AMD had claimed was “the most impressive processor in history.”
AMD originally said that the product launch would happen in the fourth quarter of 2010, but sales of the Llano were delayed because of problems at the company’s chip manufacturing plant.
The then Chief Financial Officer Thomas Seifert told analysts on an April 2011 conference call that problems with chip production for the Llano were in the past, and that the company would have ample product for a launch in the second quarter.
Press officers for AMD continued to insist that there were no problems with supply, concealing the fact that it was only shipping Llanos to top-tier computer manufacturers because it did not have enough chips.
By the time AMD ramped up Llano shipments in late 2011, no one wanted them any more, leading to an inventory glut.
AMD disclosed in October 2012 that it was writing down $100 million of Llano inventory as not shiftable.
Shares fell nearly 74 percent from a peak of $8.35 in March 2012 to a low of $2.18 in October 2012 when the market learned the extent of the problems with the Llano launch.
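That decline is easy to check with simple arithmetic; the peak and trough prices in the sketch below are the figures quoted in this article:

```python
# Percentage decline between the quoted peak and trough share prices.
peak = 8.35  # March 2012 high, USD
low = 2.18   # October 2012 low, USD

decline_pct = (peak - low) / peak * 100
print(round(decline_pct, 1))  # prints 73.9, i.e. "nearly 74 percent"
```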
During a presentation at the Game Developers Conference earlier this month, Boss Fight Entertainment’s Damion Schubert suggested the industry drop the term “whales,” calling it disrespectful to the heavy spenders who make the free-to-play business model possible. As an alternative, he proposed calling them “patrons,” as their largesse allows the masses to enjoy works that otherwise could not be made and maintained.
After his talk, Schubert spoke with GamesIndustry.biz about his own experiences with heavy spending customers. During his stint at BioWare Austin, Schubert was a lead designer on Star Wars: The Old Republic as it transitioned from its original subscription-based business model to a free-to-play format.
“I think the issue with whales is that most developers don’t actually psychologically get into the head of whales,” Schubert said. “And as a result, they don’t actually empathize with those players, because most developers aren’t the kind of person that would shell out $30,000 to get a cool speeder bike or whatnot… I think your average developer feels way more empathy for the free players and the light spenders than the whales because the whales are kind of exotic creatures if you think about them. They’re really unusual.”
Schubert said whales, at least those he saw on The Old Republic, don’t have uniform behavior patterns. They weren’t necessarily heavy raiders, or big into player-vs-player competition. They were just a different class of customer, with the only common attribute being that they apparently liked to spend money. Some free-to-play games have producers whose entire job is to try to understand those customers, Schubert said, setting up special message boards for that sub-community of player, or letting them vote on what content should be added to a game next.
“When you start working with these [customers], there’s a lot of concern that they are people who have gambling problems, or kids who have no idea of the concept of money,” Schubert said.
But from his experience on The Old Republic, Schubert came to understand that most of that heavy spending population is simply people who are legitimately rich and don’t have a problem with devoting money to something they see as a hobby. Schubert said The Old Republic team was particularly mindful of free-to-play abuse, and had spending limits in place to protect people from credit card fraud or kids racking up unauthorized charges. If someone wanted to be a heavy spender on the game, they had to call up customer service and specifically ask for those limits to be removed.
“If you think about it, they wanted to spend money so much that they were willing to endure what was probably a really annoying customer service call so they could spend money,” Schubert said.
The Old Republic’s transition from a subscription-based model to free-to-play followed a wider shift in the massively multiplayer online genre. Schubert expects many of the traditional PC and console gaming genres like fighting games and first-person shooters to follow suit, one at a time. That said, free-to-play is not the business model of the future. Not the only one, at least.
“I think the only constant in the industry is change,” Schubert said when asked if the current free-to-play model will eventually fall out of favor. “So yeah, it will shift. And it will always shift because people find a more effective billing model. And the thing to keep in mind is that a more effective billing model will come from customers finding something they like better… I think there is always someone waiting in the wings with a new way of how you monetize it. But I do think that anything we’re going to see in the short term, at least, is probably going to start with a great free experience. It’s just so hard to catch fire; there are too many competitive options that are free right now.”
Two upstart business models Schubert is not yet sold on are crowdfunding and alpha-funding. As a consumer, he has reservations about both.
“The Wild West right now is the Kickstarter stuff, which is a whole bunch of companies that are making their best guess about what they can do,” Schubert said. “Many of them are doing it very, very poorly, because it turns out project management in games is something the big boys don’t do very well, much less these guys making their first game and trying to do it on a shoestring budget. I think that’s a place where there’s a lot more caveat emptor going on.”
Schubert’s golden rule for anyone thinking of supporting a Kickstarter is to only pledge an amount of money you would be OK losing forever with nothing to show for it.
“At the end of the day, you’re investing on a hope and a dream, and by definition, a lot of those are just going to fail or stall,” Schubert said. “Game development is by definition R&D. Every single game that gets developed is trying to find a core game loop, trying to find the magic, trying to find the thing that will make it stand out from the 100 other games that are in that same genre. And a lot of them fail. You’ve played 1,000 crappy games. Teams didn’t set out to make crappy games; they just got there and they couldn’t find the ‘there’ there.”
He wasn’t much kinder to the idea of charging people for games still in an early stage of development.
“I’m not a huge fan of Early Access, although ironically, I think the MMO genre invented it,” Schubert said. “But on the MMOs, we needed it because there are things on an MMO that you cannot test without a population. You cannot test a 40-man raid internally. You cannot test large-scale political systems. You cannot test login servers with real problems from different countries, server load and things like that. Early Access actually started, in my opinion, with MMOs, with the brightest of hopes and completely and totally clean ideals.”
Schubert has funded a few projects in Early Access, but said he wound up getting unfinished games in return. Considering he works on unfinished games for a living, he doesn’t have much patience for them in his spare time, and has since refrained from supporting games in Early Access.
“I genuinely think there are very few people in either Kickstarter or Early Access that are trying to screw customers,” Schubert said. “I think people in both those spaces are doing it because they love games and want to be part of it, and it’s hard for me to find fault in that at the end of the day.”
Facebook’s Messenger app has mostly been used for keeping in touch with friends. Now people can also use it to send each other money. In the future, it could become a platform which other apps could use, if recent rumors prove true.
This Wednesday and Thursday at its F8 conference in San Francisco, Facebook will show off new tools to help third-party developers build apps, deploy them on Facebook and monetize them through Facebook advertising.
Among those tools might be a new service for developers to publish content or features of their own inside Messenger, according to a TechCrunch article. Facebook did not respond to requests for comment.
Such a service could make Messenger more useful, if the right developers sign on. Search features, photo tools or travel functions could be incorporated into Messenger and improve users’ chats around events or activities.
However, Messenger already lets users exchange money, and it also handles voice calls. Layer on more services and Messenger could become bloated and inconvenient to use.
In other words, making Messenger a platform would be a gamble.
A more versatile Messenger could generate new user data Facebook could leverage for advertising, helping it counter a user growth slowdown in recent quarters. It could also boost Facebook’s perennial efforts to increase participants in its developer platform and the number of users of its third-party apps.
Even if Facebook doesn’t turn Messenger into a platform at F8, it will likely do so in the future, said John Jackson, an IDC analyst focused on mobile business strategies. For the same reasons Facebook might turn Messenger into a platform, it could do the same for other apps like WhatsApp or Instagram, he said.
“The objective is to enrich and multiply the nature of interactions on the platform,” providing valuable data along the way, he said.
Microsoft’s Xbox division is in a much healthier state today than it was a year ago. It’s had a tough time of it; forced to reinvent itself in an excruciating, public way as the original design philosophy and marketing message for the Xbox One transpired to be about as popular as breaking wind in a crowded lift, resulting in executive reshuffles and a tricky refocus of the variety that would ordinarily be carried out pre-launch and behind closed doors. Even now, Xbox One remains lumbered with the fossilised detritus of its abortive original vision; Kinect 2.0 has been shed, freeing up system resources and marking a clear departure for the console, but other legacy items like the expensive hardware required for HDMI input and TV processing are stuck right there in the system’s hardware and cannot be extracted until the inevitable redesign of the box rolls around.
All the same, under Phil Spencer’s tenure as Xbox boss, the console has achieved a better turnaround than any of us would have dared to expect – but that, perhaps, speaks to the low expectations everyone had. In truth, despite the sterling efforts of Spencer and his team, Xbox One is still a console in trouble. A great holiday sales season was widely reported, but actually only happened in one territory (the USA, home turf that was utterly dominated by Xbox in the previous generation), was largely predicated on a temporary price-cut and was somewhat marred by serious technical issues that dogged the console’s headline title for the season, the Master Chief Collection.
Since the start of 2015, things have settled down to a more familiar pattern once more; PS4 consistently outsells Xbox One, even in the USA, generally racking up more than double the sales of its competitor in global terms. Xbox One sells better month-on-month than the Wii U, but that’s cold comfort indeed given that Nintendo’s console is widely seen as an outright commercial failure, and Nintendo has all but confirmed that it will receive an early bath, with a replacement in the form of Nintendo NX set to be announced in 2016. Microsoft isn’t anywhere near that level of crisis, but nor are its sales in 2015 thus far outside the realms of comparison with Wii U – and their installed bases are nigh-on identical.
The odd thing about all of this, and the really positive thing that Microsoft and its collaborators like to focus on, is that while the Xbox One looks like it’s struggling, it’s actually doing markedly better than the Xbox 360 was at the same point in its lifespan – by my rough calculations, Xbox One is about 2.5 million units north of the installed base of Xbox 360 at the same point. Oddly, that makes it more comparable with PS3, which was, in spite of its controversy-dogged early years, a much faster seller out the door than Microsoft’s console. The point stands, though, that in simple commercial terms Xbox One is doing better than Xbox 360 did – it just happens that PS4 is doing better than any console has ever done, and casting a long shadow over Microsoft’s efforts in the process.
The problem with this is that I don’t think very many people are under the impression that Microsoft, whose primary businesses lie in the sale of office and enterprise software, cloud services and operating systems, is in the videogames business just in order to turn a little profit. Ever since the departure of Steve Ballmer and the appointment of the much more business-focused Satya Nadella as CEO, Xbox has looked increasingly out of place at Microsoft, especially as projects like Surface and Windows Phone have been de-emphasised. If Xbox still has an important role, it’s as the flag-bearer for Microsoft’s brand in the consumer space; but even at that, the “beach-head in the living room” is far less important now that Sony no longer really looks like a competitor to Microsoft, the two companies having streamlined themselves to a point where they don’t really focus on the same things any more. Besides, Xbox One is being left behind in PS4’s dust; even if Microsoft felt like it needed a beach-head in the living room, Xbox wouldn’t exactly be doing the job any more.
But wait, we’ve been here before, right? All those rumours about Microsoft talking to Amazon about unloading the Xbox division came to nothing only a few short months ago, after all. GDC saw all manner of talk about Xbox One’s place in the Windows 10 ecosystem; Spencer repeatedly mentioned the division having Nadella’s backing, and then there’s the recent acquisition of Minecraft, which surely seems like an odd thing to take place if the position of Xbox within the Microsoft family is still up in the air. Isn’t this all settled now?
Perhaps not, because the rumours just won’t stop swirling that Microsoft has quietly put Xbox on the market and is actively hunting for a buyer. During GDC and ever since, the question of who will come to own Xbox has been posed and speculated upon endlessly. The console’s interactions with Windows 10, including the eventual transition of its own internal OS to the Windows 10 kernel; the supposed backing of Nadella; the acquisition of Minecraft; none of these things have really deterred the talk that Microsoft doesn’t see Xbox as a core part of its business any more and would be happy to see it gone. The peculiar shake-up of the firm’s executive team recently, with Phil Harrison quietly departing and Kudo Tsunoda stepping up to share management of some of Microsoft Game Studios’ teams with Phil Spencer, has added fuel to the fire; if you hold it up at a certain angle to the light, this decision could look like it’s creating an internal dividing line that would make a possible divestment easier.
Could it happen? Well, yes, it could – if Microsoft is really determined to sell Xbox and can find a suitable bidder, it could all go far more smoothly than you may imagine. Xbox One would continue to be a part of the Windows 10 vision to some extent, and would probably get its upgrade to the Windows 10 kernel as well, but would no longer be Microsoft hardware – not an unfamiliar situation for a company whose existence has mostly been predicated on selling operating systems for other people’s hardware. Nobody would buy Xbox without getting Halo, Forza and various other titles into the bargain, but Microsoft’s newly rediscovered enthusiasm for Windows gaming would suggest a complex deal wherein certain franchises (probably including Minecraft) remain with Microsoft, while others went off with the Xbox division. HoloLens would remain a Microsoft project; it’s not an Xbox project right now and has never really been pushed as an Xbox One add-on, despite the immediate comparisons it prompted with Sony’s Morpheus. Xbox games would still keep working with the Azure cloud services (Microsoft will happily sell access to that to anyone, on any platform), on which framework Xbox Live would continue to operate. So yes, Xbox could be divorced from Microsoft, maintaining a close and amiable relationship with the requisite parts of the company while taking up residence in another firm’s stable – a firm with a business that’s much more in line with the objectives of Xbox than Microsoft now finds itself to be.
This, I think, is the stumbling block. I’m actually quite convinced that Microsoft would like to sell the Xbox division and has held exploratory talks to that end; I’m somewhat less convinced, but prepared to believe, that those talks are continuing even now. However, I’m struggling to imagine a buyer. None of Xbox’s rivals would be in the market to buy such a large division, and no game company would wish to lumber itself with a platform holder business. Neither Apple nor Google makes the slightest sense as a new home for Xbox either; the whole product is distinctly “un-Apple” in its ethos and approach, while Google is broadly wary of hardware and almost entirely uninterested in games.
Amazon was the previously mentioned suitor, and to my mind, remains the most likely purchaser – but it’s seemingly decided to pursue its own strategy for living room devices for now, albeit with quite limited success. I could see Amazon still “exploring options” in this regard with Microsoft, but if that deal was going to happen, I would have expected it to happen last year. Who else is out there, then? Netflix, perhaps, is an interesting outside possibility – the company’s branching out into creating original TV content as well as being a platform for third-party content would be a reasonably good cultural match for the Game Studios aspect of Xbox, but it’s hard to imagine a company that has worked so hard to divorce itself from the entire physical product market suddenly leaping back into it with a large, expensive piece of hardware.
This, I think, is what ultimately convinces me that Xbox is staying at Microsoft – for better or worse. It might be much better for Xbox if it was a centrepiece project for a company whose business objectives matched its strengths; but I don’t think any such company exists to take the division off Microsoft’s hands. Instead, Spencer and his talented team will have to fight to ensure that Xbox remains relevant and important within Microsoft. Building its recognition as a Windows 10 platform is a good start; figuring out other ways in which Xbox can continue to be a great game platform while also bringing value to the other things that Microsoft does is the next challenge. Having turned around public perception of the console to a remarkable degree, the next big task for the Xbox team will be to change perceptions within Microsoft itself and within the investor community – if Xbox is stuck at Microsoft for the long haul, it needs to carve itself a new niche within a business vision that isn’t really about the living room any more.
Pascal is Nvidia’s next-generation GPU architecture, the successor to Maxwell. The company says it will launch next year, but details are still sketchy.
According to Nvidia CEO Jen-Hsun Huang, Pascal will arrive with mixed-precision compute, and Nvidia claims the new GPU core brings its own architectural benefits.
3D memory, or High Bandwidth Memory (HBM), is a big deal: Jen-Hsun Huang claims 32GB is possible with the new architecture, compared to 12GB on the new Maxwell-based Titan X. That is a staggering increase from the current standard of 4GB per card, to 12GB with Titan X, and probably up to 32GB with Pascal. NVLink should enable a very fast interconnect with five times the performance of the PCI Express bus we all use right now. More memory and more bandwidth are obviously needed for 4K/UHD gaming.
Huang also shared some very rough estimates: convolution compute performance should be about four times faster with FP16 precision in mixed-precision mode, while the 3D memory offers a six-fold increase in GPU-to-memory bandwidth.
Convolution and bandwidth at the front of the GPU, and bandwidth to convolution at the back, should be five times faster than on Maxwell cards. This is complex, fuzzy logic that is hard to explain with so few details shared by Nvidia about the Pascal architecture.
The wider NVLink interconnect should deliver roughly a twofold performance increase, and when you multiply these two numbers together, Nvidia arrives at a 10x compute performance increase compared to Maxwell – at least in what the Nvidia CEO calls the “CEO bench”.
He warned the audience that this is a very rough estimate. The 10x number mainly targets deep learning, as Pascal should be able to train a deep learning network ten times faster. This doesn’t mean the GPU offers ten times the gaming performance of Maxwell – not even close, we predict.
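The arithmetic behind the “CEO bench” figure can be reproduced in a few lines. Note that every factor here is Nvidia’s own ballpark claim as relayed on stage, not a measured result:

```python
# Rough reproduction of the "CEO bench" estimate described above.
# Both multipliers are Nvidia's stage claims, not benchmarks.

convolution_speedup = 5.0   # claimed end-to-end convolution gain over Maxwell
interconnect_speedup = 2.0  # claimed gain from the wider NVLink interconnect

ceo_bench = convolution_speedup * interconnect_speedup
print(ceo_bench)  # → 10.0, the headline "10x" deep learning figure
```

Multiplying two independent best-case speedups like this is exactly why Huang himself called it a very rough estimate; real workloads rarely benefit fully from both at once.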
Volta made it back to the roadmap and currently it looks like the new architecture will be introduced around 2018, or about three years from now.
Nintendo has formed a comprehensive new alliance with DeNA that will make every one of the company’s famous IPs available for mobile development.
The bedrock of the deal is a dual stock purchase, with each company buying ¥22 billion ($181 million) of the other’s treasury shares. That’s equivalent to 10 per cent of DeNA’s stock, and 1.24 per cent of Nintendo. The payments will complete on April 2, 2015.
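The percentages quoted imply rough valuations for both companies. This is purely back-of-the-envelope maths derived from the announcement’s own figures, not official market capitalisations:

```python
# Implied valuations from the cross-investment figures quoted above.
# Derived solely from the announced percentages, not market data.

investment_yen = 22e9      # ¥22 billion each way

dena_stake = 0.10          # 10 per cent of DeNA's stock
nintendo_stake = 0.0124    # 1.24 per cent of Nintendo's stock

implied_dena_value = investment_yen / dena_stake
implied_nintendo_value = investment_yen / nintendo_stake

print(f"DeNA: ¥{implied_dena_value / 1e9:.0f}bn, "
      f"Nintendo: ¥{implied_nintendo_value / 1e12:.2f}tn")
# → DeNA: ¥220bn, Nintendo: ¥1.77tn
```

The roughly eightfold gap between the two implied valuations underlines how asymmetric the alliance is: the same cash buys Nintendo a tenth of DeNA, but DeNA barely more than one per cent of Nintendo.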
What this will ultimately mean for the consumer is Nintendo IP on mobile, “extending Nintendo’s reach into the vast market of smart device users worldwide.” There will be no ports of existing Nintendo games, according to information released today, but, “all Nintendo IP will be eligible for development and exploration by the alliance.” That includes the “iconic characters” that the company has guarded for so long.
No details on the business model that these games and apps will be released under were offered, though Nintendo may well be reluctant to adopt free-to-play at first. The information provided to the press emphasised the “premium” experiences Nintendo currently offers on platforms like Wii U and 3DS. Admittedly, that could be interpreted in either direction.
However, Nintendo and DeNA are planning an online membership service that will span Nintendo consoles, PC and smart devices. That will launch in the autumn this year.
This marks a significant change in strategy for Nintendo, which has been the subject of reports about plans to take its famous IPs to mobile for at least a year. Indeed, the company has denied the suggestion on several occasions, even as it indicated that it did have plans to make mobile a part of its core strategy in other ways.
Analysts have been offering their reflections on the deal, with the response from most being largely positive.
“Nintendo’s decision to partner with DeNA is a recognition of the importance of the games app audience to the future of its business,” said IHS head of gaming Piers Harding-Rolls. “Not only is there significant revenue to be made directly from smartphone and tablet consumers for Nintendo, app ecosystems are also very important in reaching new customers to make them aware of the Nintendo brand and to drive a new and broader audience to its dedicated console business. IHS data shows that last year games apps were worth $26 billion in consumer spending globally, with handheld console games worth only 13 per cent of that total at $3.3 billion.
“The Nintendo-DeNA alliance is a good fit and offers up a number of important synergies for two companies that are no longer leaders in their respective segments.
“DeNA remains one of the leading mobile games companies in Japan and, we believe, shares cultural similarities with Nintendo, especially across its most popular big-brand content. The alliance gives Nintendo access to a large audience in its home market, which remains very important to its overall financial performance. Japanese consumers spend significantly more per capita on mobile games than in any other country and it remains the biggest market for both smartphone and handheld gaming. While the partnership gives Nintendo immediate potential to grow its domestic revenues through this audience, gaining access to DeNA’s mobile expertise is important too to realise this potential.
“This alliance makes commercial sense on many levels – the main challenge will be knitting together the cultures of both companies and aligning the speed of development and iteration that is needed in the mobile space with Nintendo’s more patient and systematic approach to games content production. How the new games are monetised may also provide a challenge considering the general differences in models used in retail for Nintendo and through in-app purchases for DeNA.”
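As a quick sanity check, the 13 per cent share Harding-Rolls cites follows directly from the two dollar figures in his own quote:

```python
# Checking the IHS figures quoted above: handheld console games at
# $3.3bn as a share of the $26bn global games-app market.

handheld_revenue = 3.3e9
games_app_market = 26e9

share = handheld_revenue / games_app_market
print(f"{share:.0%}")  # → 13%
```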
In a livestreamed press conference regarding the DeNA deal, Nintendo’s Satoru Iwata reassured those in attendance that the company was still committed to “dedicated video game systems” as its core business. To do that, he confirmed that the company was working on a new console, codenamed “NX”.
“As proof that Nintendo maintains strong enthusiasm for the dedicated game system business let me confirm that Nintendo is currently developing a dedicated game platform with a brand new concept under the development codename NX,” he said.
“It is too early to elaborate on the details of this project but we hope to share more information with you next year.”
There’s not a lot to argue with in the consensus view that Valve had the biggest and most exciting announcement of GDC this year, in the form of the Vive VR headset it’s producing with hardware partner HTC. It may not be the ultimate “winner” of the battle between VR technologies, but it’s done more than most to push the whole field forwards – and it clearly sparked the imaginations of both developers and media in San Francisco earlier this month. Few of those who attended GDC seem particularly keen to talk about anything other than Vive.
From Valve’s perspective, that might be just as well – the incredibly strong buzz around Vive meant that it eclipsed Valve’s other hardware-related announcement at GDC, the unveiling of new details of the Steam Machines initiative. Ordinarily, it might be an annoying (albeit very high-quality) problem to have one of your announcements completely dampen enthusiasm for the other; in this instance, it’s probably welcome, because what trickled out of GDC regarding Steam Machines is making this look like a very stunted, unloved and disappointing project indeed.
To recap briefly; Steam Machines is Valve’s attempt to create a range of attractive, small-form-factor PC hardware from top manufacturers carrying Valve’s seal of approval (hence being called “Steam Machines” and quite distinctly not “PCs”), running Valve’s own gaming-friendly flavour of the Linux OS, set up to connect to your living room TV and controlled with Valve’s custom joypad device. From a consumer standpoint, they’re Steam consoles; a way to access the enormous library of Steam content (at least the Linux-friendly parts of it) through a device that’s easy to buy, set up and control, and designed from the ground up for the living room.
That’s a really great idea, but one which requires careful execution. Most of all, if it’s going to work, it needs a fairly careful degree of control; Valve isn’t building the machines itself, but since it’s putting its seal of approval on them (allowing them to use the Steam trademark and promoting them through the Steam service), it ought to have the power to enforce various standards related to specification and performance, ensuring that buyers of Steam Machines get a clear, simple, transparent way to understand the calibre of machine they’re purchasing and the gaming performance they can expect as a result.
Since the announcement of the Steam Machines initiative, various ways of implementing this have been imagined; perhaps a numeric score assigned to each Machine allowing buyers to easily understand the price to performance ratio on offer? Perhaps a few distinct “levels” of Steam Machine, with some wiggle room for manufacturers to distinguish themselves, but essentially giving buyers a “Good – Better – Best” set of options that can be followed easily? Any such rating system could be tied in to the Steam store itself, so you could easily cross-reference and find out which system is most appropriate for the kind of games you actually want to play.
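To make the idea concrete, a “Good – Better – Best” certification could be as simple as the sketch below. This is purely hypothetical – the tier names, performance promises, and matching logic are all invented for illustration; Valve has announced nothing of the sort:

```python
# Hypothetical sketch of the tiered certification scheme the article
# imagines. None of these tiers, names, or targets are real Valve policy.

from dataclasses import dataclass

@dataclass
class SteamMachineTier:
    name: str
    promise: str  # the performance guarantee a buyer could rely on

TIERS = [
    SteamMachineTier("Good",   "1080p at 30fps in current Steam titles"),
    SteamMachineTier("Better", "1080p at 60fps in current Steam titles"),
    SteamMachineTier("Best",   "1440p at 90fps, VR-ready"),
]

def recommend(wants_vr: bool, wants_60fps: bool) -> SteamMachineTier:
    """The kind of cross-referencing the Steam store itself could do."""
    if wants_vr:
        return TIERS[2]
    return TIERS[1] if wants_60fps else TIERS[0]

print(recommend(wants_vr=False, wants_60fps=True).name)  # → Better
```

The point of such a scheme is that the buyer reasons about outcomes (“will it run my games smoothly?”) while the certification body worries about components – exactly the division of labour consoles already provide.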
In the final analysis, it would appear that Valve’s decision on the myriad possibilities available to it in this regard is the worst possible cop-out, from a consumer standpoint; the company’s decided to do absolutely none of them. The Steam Machines page launched on the Steam website during GDC lists 15 manufacturers building the boxes; many of those manufacturers are offering three models or more at different price and performance levels. There is absolutely no way to compare or even understand performance across the different Steam Machines on offer, short of cross-referencing the graphics cards, processors, memory types and capacities and drive types and capacities used in each one – and if you’ve got the up-to-date technical knowledge to accurately balance those specifications across a few dozen different machines and figure out which one is the best, then you’re quite blatantly going to be the sort of person who saves money by buying the components separately and wouldn’t buy a Steam Machine in a lifetime.
“Valve seems to have copped out entirely from the idea of using its new systems to make the process of buying a gaming PC easier or more welcoming for consumers”
In short, unless there’s a pretty big rabbit that’s going to be pulled out of a hat between now and the launch of the first Steam Machines in the autumn, Valve seems to have copped out entirely from the idea of using its new systems to make the process of buying a gaming PC easier or more welcoming for consumers – and in the process, appears to have removed pretty much the entire raison d’être of Steam Machines. The opportunity for the PC market to be grown significantly by becoming more “console-like” isn’t to do with shoving PC components into smaller boxes; that’s been happening for years, occasionally with pretty impressive results. Nor is it necessarily about reducing the price, which has also been happening for some years (and which was never going to happen with Steam Machines anyway, as Valve is of no mind to step in and become a loss-leading platform holder).
Rather, it’s about lowering the bar to entry, which remains dizzyingly high for PC gaming – not financially, but in knowledge terms. A combination of relatively high-end technical knowledge and of deliberate and cynical marketing-led obfuscation of technical terminology and product numbering has meant that the actual process of figuring out what you need to buy in order to play the games you want at a degree of quality that’s acceptable is no mean feat for an outsider wanting to engage (or re-engage) with PC games; it’s in this area, the simplicity and confidence of buying a system that you know will play all the games marketed for it, that consoles have an enormous advantage over the daunting task of becoming a PC gamer.
Lacking any guarantee of performance or simple way of understanding what sort of system you’re buying, the Steam Machines as they stand don’t do anything to make that process easier. Personally, I ought to be slap bang in the middle of the market for a Steam Machine; I’m a lapsed PC gamer with a decent disposable income who is really keen to engage with some of the games coming out in the coming year (especially some of the Kickstarted titles which hark back to RPGs I used to absolutely adore), but I’m totally out of touch with what the various specifications and numbers mean. A Steam Machine that I could buy with the confidence that it would play the games I want at decent quality would be a really easy purchase to justify; yet after an hour flicking over and back between the Steam Machines page launched during GDC and various tech websites (most of which assume a baseline of knowledge which, in my case, is a good seven or eight years out of date), I am no closer to understanding which machine I would need or what kind of price point is likely to be right for me. Balls to it; browser window full of tabs looking at tech spec mumbo-jumbo closed, PS4 booted up. Sale lost.
This would be merely a disappointment – a missed opportunity to lower the fence and let a lot more people enjoy PC gaming – were it not for the extra frisson of difficulty posed by none other than Valve’s more successful GDC announcement, the Vive VR headset. You see, one of the things that’s coming across really clearly from all the VR technology arriving on the market is that frame-rate – silky-smooth frame-rate, at least 60FPS and preferably more if the tech can manage it – is utterly vital to the VR experience, making the difference between a nauseating, headache-inducing mess and a Holodeck wet dream. Suddenly, the question of PC specifications has become even more important than before, because PCs incapable of delivering content of sufficient quality simply won’t work for VR. One of the appealing things about a Steam Machine ought to be the guarantee that I’ll be able to plug in a Vive headset and enjoy Valve’s VR, if not this year then at some point down the line; yet lacking any kind of certification that says “yes, this machine is going to be A-OK for VR experiences for now”, the risk of an expensive screw-up in the choice of machine to buy seems greater than ever before.
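The frame-rate requirement can be put in concrete terms via the per-frame time budget, which shrinks sharply as the target refresh rate rises (90fps is a commonly cited VR target, used here as an assumption rather than a figure from Valve):

```python
# Why frame-rate is make-or-break for VR: the time available to render
# each frame falls steeply as the target refresh rate rises.

def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to render one frame at a given rate."""
    return 1000.0 / fps

for fps in (30, 60, 90):
    print(f"{fps} fps -> {frame_budget_ms(fps):.1f} ms per frame")
# 30 fps -> 33.3 ms per frame
# 60 fps -> 16.7 ms per frame
# 90 fps -> 11.1 ms per frame
```

A PC that comfortably hits a console-style 30fps has three times the per-frame budget of one driving a 90fps headset – which is why a machine that is “fine for gaming” can still be hopeless for VR, and why certification matters so much here.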
I may be giving Steam Machines a hard time unfairly; it may be that Valve is actually going to slap the manufacturers into line and impose a clear, transparent way of measuring and certifying performance on the devices, giving consumers confidence in their purchases and lowering the bar to entry to PC gaming. I hope so; this is something that only Valve is in a position to accomplish and that is more important than ever with VR on the horizon and approaching fast. The lack of any such system in the details announced thus far is bitterly disappointing, though. Without it, Steam Machines are nothing more than a handful of small form-factor PCs running a slightly off-kilter OS; of no interest to hobbyists, inaccessible to anyone else, and completely lacking a compelling reason to exist.
Microsoft has been running its “personal assistant” Cortana on its Windows phones for a year, and will put the new version on the desktop with the arrival of Windows 10 this autumn. Later, Cortana will be available as a standalone app, usable on phones and tablets powered by Apple Inc’s iOS and Google Inc’s Android, people familiar with the project said.
“This kind of technology, which can read and understand email, will play a central role in the next rollout of Cortana, which we are working on now for the fall time frame,” said Eric Horvitz, managing director of Microsoft Research and a part of the Einstein project, in an interview at the company’s Redmond, Washington, headquarters. Horvitz and Microsoft declined to comment on any plan to take Cortana beyond Windows.
The plan to put Cortana on machines running software from rivals such as Apple and Google, as well as the Einstein project, has not previously been reported. Cortana is the name of an artificial intelligence character in the video game series “Halo.”
They represent a new front in CEO Satya Nadella’s battle to sell Microsoft software on any device or platform, rather than trying to force customers to use Windows. Success on rivals’ platforms could create new markets and greater relevance for the company best known for its decades-old operating system.
The concept of ‘artificial intelligence’ is broad, and mobile phones and computers already show dexterity with spoken language and sifting through emails for data, for instance.
Still, Microsoft believes its work on speech recognition, search and machine learning will let it transform its digital assistant into the first intelligent ‘agent’ which anticipates users’ needs. By comparison, Siri is advertised mostly as responding to requests. Google’s mobile app, which doesn’t have a name like Siri or Cortana, already offers some limited predictive information ‘cards’ based on what it thinks the user wants to know.
Virtual reality is being viewed as the next big thing, and not just for gaming. Facebook has talked about how VR headsets will let friends communicate as if they’re together in the same room.
A team of engineers at Google is building a version of Android for virtual reality applications, the Wall Street Journal reported Friday, citing two people familiar with the project. “Tens of engineers” and other staff are said to be working on the project.
The OS would be freely distributed, the report said, mirroring the strategy that made Android the most popular OS for smartphones. The report didn’t provide any launch plans, and Google didn’t immediately respond to a request for comment.
With rivals investing heavily in VR, it would make sense for Google to build its own OS. Facebook has referred to VR as the next big platform after mobile, and it bought headset maker Oculus VR last year for US$2 billion.
Proponents see VR as the future because it provides an immersive experience for gaming, entertainment, communications, and perhaps other applications not yet thought of. It’s still some way from mass adoption, though, and some people report getting nausea from VR systems, or simply don’t want a big display strapped to their head.
Still, there are lots of players in the space. Samsung has Gear VR, Sony has Project Morpheus, and Microsoft has HoloLens.
Google, clearly, doesn’t want to be left behind.
Leading the Android VR effort are veterans Clay Bavor and Jeremy Doig, the Wall Street Journal said. Bavor helped to create Google Cardboard, the company’s low-tech virtual reality viewer that attracted attention at last year’s Google I/O conference.
“Today we’re happy to announce … 64-bit builds for Firefox Developer Edition are now available on Windows, adding to the already supported platforms of OS X and Linux,” wrote Dave Camp, director of developer tools, and Jason Weathersby, a technical evangelist, in a post to a company blog.
Firefox 38’s Developer Edition, formerly called “Aurora,” now comes in both 32- and 64-bit versions for Windows. Mozilla’s schedule, which promotes a newly-numbered edition every six weeks, currently has Firefox 38 progressing through the “Beta” channel, with the final, most polished release due on May 12.
Camp and Weathersby touted the 64-bit Firefox as faster and more secure, the latter due to efficiency improvements in Windows’ anti-exploit ASLR (address space layout randomization) technology in 64-bit.
The biggest advantage of a 64-bit browser on a 64-bit operating system is that it can address more than the 4GB of memory available to a 32-bit application, letting users keep hundreds of tabs open without crashing the browser or, as Camp and Weathersby pointed out, run larger, more sophisticated Web apps, notably games.
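The 4GB ceiling mentioned above follows directly from the width of a 32-bit pointer, which can name at most 2^32 distinct byte addresses:

```python
# Where the 4GB limit on 32-bit applications comes from.

addressable_32bit = 2 ** 32              # distinct byte addresses
print(addressable_32bit // (1024 ** 3))  # → 4 (GiB)

# A 64-bit pointer raises the theoretical ceiling to 2**64 bytes
# (16 exbibytes) -- far beyond what any real process will use.
addressable_64bit = 2 ** 64
print(addressable_64bit // (1024 ** 6))  # → 16 (EiB)
```

In practice operating systems reserve part of the 32-bit address space for themselves, so a 32-bit browser process typically gets even less than the full 4GB.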
Mozilla is the last 32-bit holdout among the top five providers of browsers.
Google shipped a Windows 64-bit Chrome in August 2014 and one for OS X in November, while Apple’s Safari and Microsoft’s Internet Explorer (IE) have had 64-bit editions on OS X and Windows since 2009 and 2006, respectively. Opera Software, the Norwegian browser maker known for its same-named desktop flagship, also offers a 64-bit edition on Windows.
Lenovo’s 8-inch Tab 2 A8 will ship in June starting at $129, with a 64-bit version of Android 5.0 and a 64-bit quad-core processor from MediaTek. It was one of three tablets Lenovo announced ahead of the Mobile World Congress trade show in Barcelona.
Sixty-four-bit tablets have a few advantages. They can support more memory and therefore make light work of multimedia-intensive apps such as games, as well as apps that use encryption for security. More 64-bit Android apps are in development, so a 64-bit tablet also provides some future-proofing.
Only a handful of 64-bit Android tablets are on sale today. One of the best known is Google’s Nexus 9, which sells for $399.99 in the Google Play store. Many more are expected as vendors deploy Android 5.0 more broadly and as more 64-bit processors become available. Lenovo’s Tab 2 A8 could prompt other vendors to drive down prices for their own 64-bit Android tablets.
The Tab 2 A8 is 9 millimeters thick, weighs 360 grams and will offer eight hours of battery life, according to Lenovo. The $129 model has Wi-Fi only, while a $179 model will have integrated LTE. It doesn’t look like the LTE model will be offered in the U.S., however.
The tablet has a 5-megapixel rear-facing camera, a 2-megapixel front-facing camera and 1GB of RAM. It has a maximum of 16GB of internal storage, which can be expanded to 32GB with a microSD card.
With a 720p screen, Lenovo has compromised on the display to keep the price low.
Tablet shipments flattened last year after years of strong growth, and the 64-bit Android tablets could spur people to upgrade from older models.
Apple had an early start in 64-bit tablets with the iPad Air, but the low-priced tablets could shift the market in Android’s favor.
Lenovo also announced the 10-inch Tab 2 A10, which has a 64-bit processor but will initially ship with a 32-bit version of Android, version 4.4. The tablet will start shipping in April and users will be able to upgrade their devices to Android 5.0 in June, Lenovo said.