Firefox’s user share on all platforms — desktop and mobile — has spiraled downward in the last two months as its desktop browser continued to bleed and its attempt to capture users on smartphones failed to move the needle, new data shows.
Apple’s Safari has fared almost as poorly since April, also losing significant user share, with a continued decline on mobile and a sudden slide on the desktop to blame.
During June, 17.3% of those who went online surfed the Web using a mobile browser, according to Aliso Viejo, Calif.-based Net Applications. Mobile browsing’s climb of nearly 6 percentage points in the last 12 months represented a growth rate of 52%.
As in April, when Computerworld last analyzed desktop + mobile browser user share, June’s numbers put the hurt on Mozilla most of all: Firefox’s total user share — the combination of desktop and mobile — was 12.9% for June, its lowest level since Computerworld began tracking the metric five years ago, and 1.2 percentage points lower than just two months before.
Mozilla’s problem remains an inability to attract a mobile audience. Although the company has long offered Firefox on Android and its Firefox OS has begun to appear on a limited number of smartphones, its mobile share was just seven-tenths of one percent, about a third that of the next-smallest mobile browser, Microsoft’s Internet Explorer.
Firefox hasn’t helped itself of late, either. In June the desktop version lost user share for the eighth straight month, falling 1.3 percentage points to end at 15.4%. In the last year, Firefox’s desktop user share as measured by Net Applications has dropped 3.6 percentage points, representing a 19% decline.
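Percentage-point drops and relative percentage declines are easy to conflate; a quick back-of-the-envelope check, using only the figures reported above, shows how the 3.6-point fall translates into a 19% decline:

```python
# Firefox desktop user share, per the Net Applications figures cited above.
current_share = 15.4   # June share, in percent
point_drop = 3.6       # percentage-point fall over the past 12 months

# A year earlier the share must have been the current level plus the drop.
year_ago_share = current_share + point_drop   # 19.0%

# The relative decline is the drop measured against that year-ago base.
relative_decline_pct = point_drop / year_ago_share * 100
print(f"{relative_decline_pct:.0f}% decline")  # prints "19% decline"
```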
The timing is terrible, as Mozilla’s current contract with Google ends in November. That deal, which assigned Google’s search engine as the default for most Firefox customers, has generated the bulk of Mozilla’s revenue. In 2012, for example, the last year for which financial data was available, Google paid Mozilla an estimated $272 million, or 88% of all Mozilla income.
Going into this year’s contract renewal talks, Mozilla will be bargaining from a much weaker position, down 43% in total user share since June 2011.
Apple remained behind Mozilla in desktop + mobile browser user share, with a cumulative 12.3%, down from 13.1% two months earlier. Nearly two-thirds of its total was credited to Safari on iOS.
AT&T Inc announced that it will be the first U.S. wireless carrier to sell LG Electronics’ smartwatch, a wristwatch that connects to Android phones and answers voice commands; the device goes on sale July 11.
The announcement comes as demand for wearable devices surges. Juniper Research estimates the value of the wearable device market this year at $1.5 billion, up from $800 million in 2013.
The LG “G Watch,” which was made in partnership with Google Inc, will sell for $229 and be available for pre-order starting July 8.
It has a 1.65-inch display that delivers the notifications customers receive on their Android phones and can connect to calendars and applications.
“Because the LG G Watch works with so many of our Android smartphones, it should be a wearable device that appeals to a wide array of consumers,” Jeff Bradley, senior vice president of devices at AT&T, said in a statement.
“Its ability to anticipate your schedule and traveling needs will help you plan your schedule more efficiently while on-the-go.”
The announcement also comes as rumors swirl about the specifications on Apple Inc’s smartwatch, which has yet to be announced, but is expected as early as October.
Late last year, Frank Gibeau switched roles at Electronic Arts, moving from president of the PC and console-focused EA Labels to be the executive vice president of EA Mobile. Speaking with GamesIndustry International at E3 last month, Gibeau said he was enticed by the vast opportunity for growth in the mobile world, and the chance to shape the publisher’s efforts in the space.
“One of the things I enjoy doing is building new groups, new teams and taking on cool missions,” Gibeau said. “The idea was that EA is known as a console company, and for our PC business. We’re not particularly well known for our mobile efforts, and I thought it would be an awesome challenge to go in and marshal all the talent and assets of EA and, frankly, build a mobile game company.”
It might sound a little odd to hear Gibeau speaking of building a mobile game company at EA. After all, he described EA as “the king of the premium business model” in the mobile world not too long ago, when the company was topping charts with $7 apps like The Sims 3 or raking it in with paid offerings like Tetris, Monopoly, or Scrabble.
“Two years ago, we were number one on feature phones with the premium business model,” Gibeau said. “Smart devices come in, freemium comes in, and we’re rebuilding our business. I think we’ve successfully gotten back into position and we see a lot of opportunity to grow the business going forward, but if you had talked to me about two years ago and tried to speculate there would be a company called Supercell with that much share and that many games, we wouldn’t even have come close.”
Gibeau expects that pace of upheaval to continue in the mobile market, but some things seem set in stone. For example, Gibeau is so convinced that the days of premium apps are done, he has EA Mobile working exclusively on freemium these days.
“If you look at how Asia operates, premium just doesn’t exist as a business model for interactive games, whether it’s on PC or mobile devices. If you look at the opportunity set, if you’re thinking globally, you want to go freemium so you can capture the widest possible audience in Japan, Korea, China, and so on… With premium games, you just don’t get the downloads you do with a free game. It’s better to get as many people into your experience and trying it. If they connect with it, that’s great, then you can carry them for very long periods of time. With premium, given that there are so many free offerings out there, it’s very difficult to break through.”
Unfortunately for EA, its prior expertise is only so relevant in the new mobile marketplace. Its decades of work on PCs and consoles translated well to premium apps that didn’t require constant updating, but Gibeau said running live services is a very different task – one EA needs to get better at.
“Our challenge frankly is just mastering the freemium live service component of what’s happening in mobile,” Gibeau said. “That’s where we’re spending a lot of our time right now. We think we have the right IP. We have the right talent. We’ve got great production values. Our scores from users are pretty high. It’s really about being able to be as good as Supercell, King, Gungho, or some of these other companies at sustained live services for long periods of time. We have a couple games that are doing really well on that front, like The Simpsons, Sims Freeplay, and Real Racing, but in general I think that’s where we need to spend most of our time.”
As Gibeau mentioned, EA has already had some successes on that front, but its record isn’t exactly unblemished. The company launched a freemium reboot of Dungeon Keeper earlier this year and the game was heavily criticized for its aggressive monetization approach. In May, EA shuttered original developer Mythic.
“Dungeon Keeper suffered from a few things,” Gibeau said. “I don’t think we did a particularly good job marketing it or talking to fans about their expectations for what Dungeon Keeper was going to be or ultimately should be. Brands ultimately have a certain amount of permission that you can make changes to, and I think we might have innovated too much or tried some different things that people just weren’t ready for. Or, frankly, were not in tune with what the brand would have allowed us to do. We like the idea that you can bring back a brand at EA and express it in a new way. We’ve had some successes on that front, but in the case of Dungeon Keeper, that just didn’t connect with an audience for a variety of reasons.”
The Dungeon Keeper reboot wasn’t successful, but EA continues to keep the game up and running, having passed the live service responsibilities to another studio. It’s not because the company is hoping for a turnaround story so much as it’s just one more adaptation to running games with a live service model.
“If you watch some of the things we’ve been doing over the last eight or nine months, we’ve made a commitment to players,” Gibeau said. “We’re sincere and committed to that. So when you bring in a group of people to Dungeon Keeper and you serve them, create a live service, a relationship and a connection, you just can’t pull the rug out from under them. That’s just not fair. We can sustain the Dungeon Keeper business at its level for a very long time. We have a committed group of people who are playing the game and enjoying it. So our view is going to be that we’ll keep Dungeon Keeper going as long as there’s a committed and connected audience to that game. Are we going to sequel it? Probably not. [Laughs] But we don’t want to just shut stuff off and walk away. You can’t do that in a live service environment.”
Much like EA’s institutional experience, there’s only so much of Gibeau’s past in the console and PC core gaming world that is directly relevant to today’s mobile space. But as the segment grows out of what he calls the “two guys in a garage” stage, EA’s organizational expertise will be increasingly beneficial.
“These teams are starting to become fairly sizeable,” Gibeau said, “and the teams and investment going into these games is starting to become much greater. Now they’re much, much less than you see on the console side, but there’s a certain rigor and discipline in approach from a technology and talent standpoint that’s very applicable… If you look at these devices, they will refresh their hardware and their computing power multiple times before you see a PlayStation 5. And as you see that hardware get increasing power and capability on GPU and CPU levels, our technology that we set up for gen 4 will be very applicable there. We’re going to be building technologies like Frostbite that operate on mobile devices so we can create richer, more immersive experiences on mobile.”
Even if mobile blockbusters like Candy Crush Saga aren’t exactly pushing the hardware, Gibeau said there’s still a need for all that extra horsepower. With the increased capabilities of multitasking on phones, he sees plenty of room for improvement before the industry runs up against diminishing returns on the CPU and GPU front. He likens today’s mobile titles to late-generation PS2 games, with PS3 and Xbox 360-level games just around the corner.
“As it relates to games, this is like black and white movies with no sound at this point, in terms of the type of games we’ve created,” Gibeau said. “We’re just starting to break through on the really big ideas is my personal view. If you look at games like Clash of Clans, Real Racing, even Candy Crush, they’re breaking through in new ways and spawning all types of new products that are opening up creativity and opportunities here. So I think computing power is just something we’ll continue to leverage.”
The best part for Gibeau is that the hard work of convincing people to buy these more powerful devices isn’t falling solely on the shoulders of game developers.
“The beauty of it is it’s not a single-use device,” Gibeau said, “so people will be upgrading them for a better camera, better video capability, different form factor, different user inputs, as a wearable… I think there’s so much pressure from an innovation standpoint between Samsung, Apple, Google, and Windows coming in, that they’ll continue to one up each other and there will be a very vibrant refresh cycle for a very long period of time. The screens get better, the computing power gets better, and I don’t have to worry about just games doing it like we were in the console business. Those were pretty much just games consoles; these are multi-use devices. And the beauty of it is there will be lots of different types of applications coming in and pushing that upgrade path.”
Samsung’s exit from plasma TVs was originally hinted at in January by John Ryu, VP of Samsung Visual Display. Now, however, the company has confirmed that the reason for the shutdown is simply that no one was buying them. With the rise of LED and OLED 4K TVs in the last five years or so, that’s no great surprise, really.
“We plan to continue our PDP TV business until the end of this year, due to changes in market demands. We remain committed to providing consumers with products that meet their needs, and will increase our focus on growth opportunities in UHD TV’s and Curved TV’s,” a Samsung Electronics spokesperson stated.
The Samsung PNF8500 series, released in 2013, was the last plasma set Samsung shipped. The firm didn’t launch any more plasma TVs during 2014, and now we know why. Whether the PNF8500 series will be discontinued in the near future remains a mystery, but we can assume that it won’t stay on the shelves for much longer after Samsung’s announcement today.
Samsung’s decision to pull plasma TVs from its production line follows that of Panasonic, which pulled the plug on plasma TVs last year. This means that the plasma TV market has now been left in the hands of LG, which is likely to carry on production as it now has a plasma TV market monopoly.
ARM has announced two programs to assist Android’s ascent into the 64-bit architecture market.
The first of those is a Linaro port of the Android Open Source Project to the 64-bit ARMv8-A architecture. ARM said the port was done on a development board codenamed “Juno”, which is the second initiative to help Android reach the 64-bit market.
The Juno hardware development platform includes a system on chip (SoC) powered by a quad-core ARM Cortex-A53 CPU and a dual-core ARM Cortex-A57 CPU in an ARM big.LITTLE processing configuration.
Juno is said to be an “open, vendor neutral ARMv8 development platform” that will also feature an ARM Mali-T624 graphics processor.
Alongside the news of the 64-bit initiatives, ARM also announced that Actions Semiconductor of China signed a license agreement for the 64-bit ARM Cortex-A50 processor family.
“Actions provides SoC solutions for portable consumer electronics,” ARM said. “With this IP license, Actions will develop 64-bit SoC solutions targeting the tablet and over-the-top (OTT) set-top box markets.”
The announcements from ARM come at an appropriate time, as it was only last week that Google announced the latest version of its Android mobile operating system, Android L, which comes with support for 64-bit processors. ARM’s latest developments mean that Android developers are likely to take advantage of them in the push to take Android to the 64-bit market.
Despite speculation that it would launch as Android 5.0 Lollipop, Google outed its next software iteration on Wednesday last week as simply Android L, touting the oddly-named iteration as “the largest update to the operating system yet”.
A Facebook spokesman acknowledged that the experiment on nearly 700,000 unwitting users in 2012 had upset users and said the company would change the way it handled research in future.
The study, to find if Facebook could alter the emotional state of users and prompt them to post either more positive or negative content, has caused a furor on social media, including Facebook itself.
“We’re aware of this issue and will be speaking to Facebook, as well as liaising with the Irish data protection authority, to learn more about the circumstances,” the Information Commissioner’s Office (ICO) spokesman Greg Jones said in an email.
Jones said it was too early to tell exactly what part of the law Facebook may have infringed. The company’s European headquarters is in Ireland.
The Commissioner’s Office monitors how personal data is used; it has the power to force organizations to change their policies and can levy fines of up to 500,000 pounds ($839,500).
Facebook said it would work with regulators and was changing the way it handled such cases.
“It’s clear that people were upset by this study and we take responsibility for it,” Facebook spokesman Matt Steinfeld said in an email.
“The study was done with appropriate protections for people’s information and we are happy to answer any questions regulators may have.”
One of the top three malware programs affecting businesses in the second quarter is a worm that takes advantage of the large number of companies still using Windows XP, Trend Micro has warned.
The worm, dubbed DOWNAD and also known as Conficker, can infect an entire network via a malicious URL, spam email, or removable drive. Windows XP is particularly susceptible because the worm exploits the MS08-067 Server service vulnerability in order to execute arbitrary code.
DOWNAD also has its own domain generation algorithm (DGA) that allows it to create randomly generated URLs, which it then contacts to download files to the infected system. Trend Micro said around 175 IP addresses were found to be related to the DOWNAD worm; these addresses use various ports and are reached via the randomly generated URLs that DOWNAD’s DGA capability produces.
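The core idea of a DGA is that both the infected machine and the operator derive the same pseudo-random domain list from a shared seed (typically the date), so the operator can pre-register one of the day’s domains while defenders must anticipate all of them. The sketch below is a generic illustration of that mechanism, not Conficker’s actual algorithm; the hashing scheme and TLD list are invented for the example:

```python
import hashlib
from datetime import date

def generate_domains(seed_date: date, count: int = 5) -> list:
    """Illustrative DGA: derive deterministic pseudo-random domains
    from a date seed. NOT Conficker's real algorithm."""
    tlds = ["com", "net", "org"]
    domains = []
    for i in range(count):
        # Hash the date plus a counter to get repeatable pseudo-random bytes.
        material = "{}-{}".format(seed_date.isoformat(), i).encode()
        digest = hashlib.md5(material).hexdigest()
        # Map the hex digest onto a short alphabetic domain label.
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:10])
        domains.append("{}.{}".format(label, tlds[i % len(tlds)]))
    return domains

# Malware and operator compute identical lists for the same date.
print(generate_domains(date(2014, 7, 1)))
```

Because the list is deterministic, security vendors who reverse-engineer the algorithm can pre-compute and sinkhole upcoming domains, which is how researchers tracked Conficker’s infrastructure.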
“During our monitoring of the spam landscape, we observed that in Q2, more than 40 percent of malware related spam mails are delivered by machines infected by DOWNAD worm,” said Trend Micro anti-spam research engineer Maria Manly in a blog post.
“A number of machines are still infected by this threat and leveraged to send the spammed messages to further increase the number of infected systems. And with Microsoft ending the support for Windows XP this year, we can expect that systems with this OS can be infected by threats like DOWNAD.”
The security company warned that spam campaigns delivering FAREIT, MYTOB, and LOVGATE payloads in email attachments are attributed to DOWNAD infected machines. FAREIT is a malware family of information stealers that download variants of the Zeus Trojan, while MYTOB is an old family of worms known for sending a copy of itself in spam attachments.
The other top sources of malware-laden spam are the CUTWAIL botnet and Gameover ZeuS (GoZ). Manly said CUTWAIL was previously used to download GoZ directly, but now a downloader called UPATRE delivers GoZ malware or ZBOT variants that have peer-to-peer functionality.
“In the last few weeks we have reported various spam runs that abused Dropbox links to host malware like UPATRE,” Manly said. “We also spotted a spammed message in the guise of voice mail that contains a Cryptolocker variant. The latest we have seen is a spam campaign with links that leveraged CUBBY, a file storage service, this time carrying a banking malware detected as TSPY_BANKER.WSTA.”
According to Manly, cybercriminals and threat actors are probably abusing file storage platforms to mask their malicious activities and go undetected in the system and network.
“As spam with malware attachment continues to proliferate, so is spam with links carrying malicious files. The continuous abuse of file hosting services to spread malware appears to have become a favoured infection vector of cyber criminals most likely because this makes it more effective given that the URLs are legitimate thereby increasing the chance of bypassing anti-spam filters,” she added.
IBM has announced the general release of Bluemix, its cloud development platform-as-a-service (PaaS) offering.
The Cloud Foundry-based suite, which has been in open beta since February, now boasts more than 50 services and has been adopted at a rate that, the company claims, makes it one of “the largest Cloud Foundry deployments in the world”.
IBM launched Bluemix as part of a $1bn investment in cloud computing; it is a framework that allows users to create cloud-based applications, slotting in open source services from the IBM Cloud Marketplace.
New services added to the list include Gamification, which adds incentives to apps; MQ Light, a messaging service; Redis Cloud, which lets Redis users run datasets easily; and Sonian Email Archive, which allows mining of big data from emails and their attachments.
“Organizations are rapidly moving new, innovative apps to Cloud Foundry’s scalable, user-friendly model,” said James Watters, vice president of Cloud Foundry Product and Ecosystem at Pivotal.
“IBM Bluemix furthers the Cloud Foundry vision for rapid app development, as well as the ability for developers to work easily between platforms and tools from multiple providers.”
The investment in Bluemix stretches far beyond the technology infrastructure: over 80,000 consultants have been trained to advise developers on how to use it, and the first of a series of so-called Bluemix Garages is opening in San Francisco as a place where developers from different companies can get advice both from IBM itself and through cross-pollination of ideas from other companies.
The news coincides with the announcement of a new IBM data centre in London.
After a test period, Twitter said that it was globally deploying its “mobile app installs” program, which allows companies to promote their mobile apps in users’ feeds.
Twitter began testing the program with a limited number of advertisers in the U.S. in April — tests that the company says went well. Participants in that program included mobile ride-hailing service Lyft and games publisher Electronic Arts.
The program lets companies publish links to download mobile apps. These ads are meant to appear like regular posts in users’ feeds.
Mobile app ads have become very successful for Facebook, helping to drive the download of roughly 60 percent of the top-grossing apps in Apple’s App Store, according to Facebook.
Twitter, for its part, is looking to better monetize its service amid sagging user growth. The company has yet to turn a profit.
Twitter already lets advertisers target their ads by users’ interests, keywords, favorite TV programs, language and other criteria.
Advertisers promoting their mobile apps will be able to leverage those capabilities too, Twitter said.
Word on the street is that Redmond is releasing a smartwatch in the autumn. Another deep throat told Tom’s Hardware that the launch will be in October, which is the suspected launch date of the Apple iWatch. Microsoft’s watch looks vaguely interesting: it will have 11 sensors and will apparently not be much like any of the LG or Samsung smartwatches which have been released.
The screen is said to sit on the inside of the wrist, more like a Nike Fuelband than a normal watch. What is more interesting is that the watch will have open APIs and cross-platform capability, which will mean that it can talk to Android phones. This will make it a lot more flexible than Apple’s closed model, and will mean that Apple has to depend on its marketing to make its watch popular. Not that that has done it any harm so far.
Apple convinced the world that Microsoft’s keyboardless netbooks were the way forward once there was an Apple logo on them, and people were happy to believe they had been invented by Steve Jobs.
Cisco’s investigation into the malware identified a group of attacks by the same threat actor. Cisco exposed the threat actor’s network after discovering a Microsoft Word document that downloaded and executed a secondary sample, which then began beaconing to a command and control server.
“While basic, the Office Macro attack vector is obviously still working quite effectively,” Cisco technical lead Craig Williams said in a blog post. “When the victim opens the Word document, an On-Open macro fires, which results in downloading an executable and launching it on the victim’s machine.”
Williams said that this threat actor seemed to target high-profile companies in rich industries such as banking, oil, television and jewellery. Victims of the attacks were duped into opening an email attachment in the form of an invoice, written specifically for the recipient.
“The message [was] a fairly simple phish email which includes a fake name and an attached Microsoft Word document. However, this was simply the outer layer of the onion so it’s best, we think, to start from the beginning,” Williams said.
“This particular phishing attempt was noticed in Cisco’s email corpus due to the email attachment’s poor block rates at most antivirus engines.”
“For the duration of this campaign there is one thing that remained consistent: at best, a few antivirus engines may have generically detected the attached malware but more often than not coverage was provided by a single vendor, or no coverage was provided at all.”
IT research firm Gartner has adjusted its forecast for global IT spending growth downwards by about one-third for this year, blaming a tougher competitive environment as well as pressure on vendors to lower prices.
Spending will rise 2.1% to $3.7 trillion in 2014, down from the 3.2% growth rate Gartner had predicted for 2014.
The downgraded forecast isn’t necessarily a cause for concern, said Gartner managing vice president Richard Gordon in an interview.
“In the context of an improving global economic situation, to have IT spending be anemic, in the low single digits, might be a surprise on the face of it,” he said. But customers aren’t necessarily cutting back on spending, Gordon said. “They’re getting better deals for their money and spending their money carefully.”
Data center system spending will be the slowest growing category in 2014, rising only 0.4% to $140 billion due to factors such as lower-cost storage options in the cloud and a move away from high-end server systems.
Devices spending will rise just 1.2% to $685 billion due to price cuts on mobile phones and tablets, Gartner said.
IT services revenue is expected to jump 3.8% this year to $967 billion after “weak vendor performance” in 2013, according to Gartner. Within this category, however, spending on IT outsourcing has been slowed by the ongoing price war between cloud storage vendors. Implementation services revenue is being constrained by customers choosing to conduct smaller projects.
Meanwhile, enterprise software revenue will rise 6.9% in 2014 to $321 billion, buoyed by stronger growth in infrastructure software sales but tempered by a slower rise in spending on applications, Gartner said.
The increase in connected devices in the so-called Internet of Things will help push software sales higher in coming years, Gordon said. “With IoT and digital business in general, you’ve got a lot more data out there that needs to be collected, stored and analyzed.”
Gran Turismo 7 had been rumored to be in development, and Kazunori Yamauchi, CEO of Polyphony Digital, has now officially confirmed that this is the case. Yamauchi revealed to Eurogamer that “we are working on the title,” but added that he doesn’t think it will make it this year.
In addition, we learned that Gran Turismo 7 will not be getting a Prologue release this time around, with the development team instead focusing on the full release. PS2-era cars are likely to be included in Gran Turismo 7, though they may be upgraded to highly detailed Premium models for this release. It is unlikely that any of the standard cars will be dropped, because each car has its own fans.
Sony has not officially announced Gran Turismo 7, but Yamauchi had already told fans back in September to expect a Gran Turismo release on the PlayStation 4 in a year or two.
Intel, which is Imagination’s biggest shareholder, announced overnight that it was selling a 9 per cent stake in the company held by its venture capital arm. The sale will cut Intel’s holding in Imagination to about 4 per cent. It was not as if Imagination was doing badly: the company announced that it was making a fortune in new licensing deals, both for smartphones and for new products.
In fact, it is probably because Imagination is doing so well that Intel felt it was safe to offload the shares. The share price was double what Intel paid, so the chipmaker was laughing all the way to the bank, having lost nothing on the deal. The US chipmaker had built its stake in 2009 in an apparent move to block potential bids from rivals such as Apple, Imagination’s biggest customer, which still has an 8.6 per cent holding.
Intel made it clear that it continues to have a business relationship with the company, having licensed several generations of Imagination Technologies’ graphics and video processing cores.
Breaking up is hard to do, as the Carpenters famously crooned; right now, Microsoft is discovering, not for the first time this generation, that dumping your old ideas is just as tough as dumping your clingy ex. It may be the right thing to do, but it’s a fraught process and one that it’s tough to emerge from without attracting plenty of ire along the way.
Kinect 2.0 and Xbox One were, after all, meant to be married for life. The expensive sensor was bundled with the console from day one. Its functionality was deeply ingrained in the design of the system’s user interface, and a whole 10% of GPU resources were permanently devoted to it. Originally, Xbox One wasn’t even capable of booting up without a Kinect plugged in; it was an intrinsic and inseparable part of the console. In sickness and health, till death do they part.
Well, like so many relationships and marriages, it turned out that there were plenty of good reasons to break up long before the Grim Reaper raised a bony hand. Kinect has been at the root of many of Microsoft’s woes with Xbox One. It raised the price of the system, making the console $100 more expensive than the more technically impressive and well-liked PS4. It seemed to imply that Xbox One was a console aimed at casual gamers (with whom motion controls are now, fairly or unfairly, strongly associated) at the expense of the core gamers who made Xbox 360 successful. Moreover, in an age of actually rather justifiable paranoia about privacy, a camera in your living room that never turned off made plenty of people downright uncomfortable.
Worst of all, up to this point, Kinect just hasn’t justified its own existence. There aren’t any great games on the Xbox One that use Kinect extensively; there’s simply nothing there to make people think, “wow, this is something you couldn’t do on PS4 because it doesn’t have Kinect”. After 12 months of doggedly repeating the party line that Kinect was a great unique selling point for Xbox One, Microsoft’s decision to unbundle the peripheral from the console is a tacit admission that it wasn’t a selling point at all. Innovative hardware is meaningless if nobody builds must-have games to exploit the functionality.
It’s no coincidence, I think, that the decision to bring Kinect around the back of the woodshed and put a bullet in it was announced shortly after Phil Spencer took over Xbox. Spencer understands games in a way that his immediate predecessors did not; he would have an innate understanding of the fact that games sell consoles, and untapped potential in hardware is not exciting to consumers, it’s simply wasteful. An expensive peripheral that doesn’t drive great software isn’t a USP, it’s a ball and chain around the ankle of the console. It had to go.
It’s a little disingenuous, then, to see Spencer trying to claim that everything is fine in the land of Kinect. Speaking to GamesIndustry International at E3, he simultaneously acknowledged that Kinect was dragging the console down (noting that Kinect couldn’t succeed if Xbox One itself failed, which is a tacit admission that bundling Kinect with the console was risking a huge failure) while also claiming that plenty of consumers will buy the Kinect peripheral separately, and it’ll continue to be a big part of the Xbox One offering.
Not an unexpected claim, of course; but also patently not a true one. Kinect wasn’t supported strongly by developers even when it was bundled with every Xbox One. Now that it’s been dropped to the status of “expensive peripheral with no good games”, developer support will entirely dry up. Just like its predecessor on the Xbox 360, Xbox One Kinect is going to be relegated to lip-service support (“jump around to avoid enemy attacks, or just press B… Huh, you pressed B? Not up for jumping around? Surprising…”) and a handful of dancing or exercise titles. Not that there’s anything wrong with dancing or exercise titles, but you don’t get platform-defining tech from them; if you did, the world would have changed a hell of a lot more when Dance Dance Revolution mats came out for the PS1.
I don’t want to give Microsoft too much of a hard time for its decision with Kinect, not least because it’s the right decision. It gives the company price parity with Sony and might help to fix some of the perception problems Xbox One faces. On the other hand, while Kinect was a failed USP – and thus deserved to be ditched – it was at least an attempt at a USP. With the right software and services backing it up, it could have given the Xbox One an offer different enough from Sony’s to be very interesting indeed – but building that software would have taken time, effort and attention. Spencer, with full visibility of the firm’s software pipeline, chose instead to amputate the limb and cauterise the wound. Painful, but mercifully quick; definitely a vote of no confidence in whatever Kinect software is still under development; possibly a move that will make Xbox One walk with a limp for the rest of its life.
What I hope the Xbox team recognises is that ditching Kinect isn’t enough – and hollow platitudes about how important the peripheral remains to the company’s strategy certainly aren’t enough either. What Xbox One needs is something to replace Kinect, a new USP; one that isn’t rubbish, this time. That USP could just be software, with Microsoft doubling down on its internal studios and building its relationships with third-parties to produce genuine exclusives (as opposed to timed-release DLC exclusives, which just look desperate and annoying no matter which platform is involved). It could be services, as the company attempts to leapfrog Sony and regain the lead Xbox Live once had over PSN; what form that might take is tough to say, but there’s certainly still headway to be made in the provision of online services, and Microsoft’s current lag makes this an area brimming with opportunity. Most likely, a combination of both great games and great new services will be needed to make Xbox One attractive to consumers; to give it the USP that Kinect was supposed to be, but never was.
There’s an interesting comparison, of course, to be made with Nintendo’s difficulties with Wii U. I observed some time ago that both Microsoft and Nintendo had made the same basic error with their new consoles – they launched with expensive peripherals that boosted the cost of the console but had yet to show any dividends in terms of unique, must-have software. In Microsoft’s case, Kinect has now been ditched; losing the millstone, but with no sign yet of a new USP to replace it. Nintendo, however, has taken quite the opposite approach. The GamePad remains firmly bundled with the Wii U, and while software for the GamePad still doesn’t impress, there’s obviously potential there; the short, cryptic videos of Miyamoto Shigeru working up gameplay demos using the pad, which were shown at the end of Nintendo’s E3 broadcast, were a statement of intent. Rather than ditching its white elephant, Nintendo is trying to figure out how to put it to work.
So, over the next year, we’re going to get to see how two diametrically opposed solutions to the same problem work out. Microsoft, making the latest of several U-turns, has gone back to square one and now needs to find a new selling point for the Xbox One. Nintendo has doubled down on the Gamepad, and needs to convince consumers of the worth of its innovation – not to mention the worth of the Wii U overall. Different challenges with similar requirements; they both need great games to prove their point. The consumer wins, in this situation, but it will be interesting to see which company, if either, can emerge victorious from these trials.