Dublin-based StatCounter pegged November’s mobile browser usage share — a tally of website pages viewed, and thus a measurement of online activity — at 20%, with personal computers accounting for the remaining 80%.
In the last 12 months, mobile’s global usage share grew by 7 percentage points, representing a 53% annual increase.
Mobile’s browsing growth is partly a side effect of a global slump in personal computer sales, as customers instead purchase smartphones and tablets and shift their time online from PCs to mobile. For the year, personal computer shipments will be more than 10% lower than the year before, when shipments contracted by a then-historic 4% compared to 2011.
Usage share gains for mobile have come at the expense of what StatCounter defines as “desktop,” a category that includes both desktop and notebook PCs, primarily powered by Microsoft’s Windows, and Macs running Apple’s OS X. Desktop browser usage dropped 2 percentage points to 80% in the last three months, and fell 7 points in the last 12.
In September 2009, when Computerworld began tracking mobile browser usage — seven months before Apple started selling its first iPad — desktop controlled 98.9% of the usage total, according to StatCounter.
Net Applications, an Aliso Viejo, Calif.-based rival to StatCounter, also tracks desktop and mobile browsing, but uses a different methodology that essentially counts individual users, not online activity.
By Net Applications’ measurement, 13.2% of all unique visitors to its clients’ websites in November did so using a smartphone or tablet. Computerworld labels Net Applications’ numbers “user share” to differentiate them from StatCounter’s.
Personal computers accounted for 86.2% of the global browser user share for November by Net Applications’ tally.
Not surprisingly, browser makers have jumped on the mobile bandwagon. Nearly 60% of Apple’s November user share, as defined by Net Applications, was generated by the iOS version of Safari, for example, while 20% of Google’s user share came from its stock Android browser and the newer Chrome on that mobile operating system.
Meanwhile, Microsoft’s mobile version of Internet Explorer (IE) accounted for less than half of one percent of IE’s total user share.
Nokia has been warned by EU regulators not to “behave like a patent troll” following Microsoft’s acquisition of the company’s devices business.
Joaquín Almunia, European Commission VP in charge of competition, said on Monday that while he had approved the $7.2bn sale of Nokia’s devices business to Microsoft, there is a danger that Nokia will take advantage of its vast patent portfolio.
Speaking at an event in Paris on Monday, Almunia said, “Since Nokia will retain its patent portfolio, some have claimed that the sale of the unit would give the company the incentive to extract higher returns from this portfolio.
“These claims fall outside the scope of our review. When we assess a merger, we look into the possible anti-competitive impact of the company resulting from it. We cannot consider what the seller will do. If Nokia were to take illegal advantage of its patents in the future, we will open an antitrust case – but I sincerely hope we will not have to.
“In other words, the claims we dismissed were that Nokia would be tempted to behave like a patent troll or – to use a more polite phrase – a patent assertion entity.
“You can rest assured that we are watching this space very carefully. DG competition will hold patent trolls to the same standards as any other patent holder,” he added.
Almunia’s concerns follow Nokia’s patent victory over HTC in a UK court last week.
Last Tuesday a UK judge ruled that Nokia could assert a ban on the HTC One Mini and HTC One Max smartphones, ruling that HTC infringed Nokia’s EP0998024 patent, described as a “modular structure for a transmitter and a mobile station”.
IDC expects that anywhere from 25% to 30% of all the servers shipped next year will be delivered to cloud services providers.
By 2017, three years from now, nearly 45% of all servers leaving manufacturers will be bought by cloud providers.
“What that means is a lot of people are buying SaaS,” said Frank Gens, referring to software-as-a-service. “A lot of capacity is shifting out of the enterprise into cloud service providers.”
The increased use of SaaS is a major reason for the market shift, but so is virtualization to increase server capacity. Data center consolidations are eliminating servers as well, along with the purchase of denser servers capable of handling larger loads.
For sure, IT managers are going to be managing physical servers for years to come. But the number will decline, based on market direction and the experience of IT managers.
Two years ago, when Mark Endry became the CIO and SVP of U.S. operations for Arcadis, a global consulting, design and engineering company, the firm was running its IT in-house.
“We really put a stop to that,” said Endry. Arcadis is moving to SaaS, either to add new services or substitute existing ones. An in-house system is no longer the default, he added.
“Our standard RFP for services says it must be SaaS,” said Endry.
Arcadis has added Workday, a SaaS-based HR management system; replaced an in-house training management system with a SaaS system; and swapped an in-house ADP HR system for a service. The company is also planning a move to Office 365, and will stop running its in-house Exchange and SharePoint servers.
As a result, in the last two years Endry has kept the server count steady at 1,006, spread across three data centers. He estimates that without the efforts at virtualization, SaaS and other consolidations, the company would have 200 more physical servers.
Endry would like to consolidate the three data centers into one and continue shifting to SaaS to avoid future maintenance costs, as well as the need to customize and maintain software. SaaS can’t yet be used for everything, particularly ERP, but “my goal would be to really minimize the footprint of servers,” he said.
Similarly, Gerry McCartney, CIO of Purdue University, is working to cut server use and switch more to SaaS.
The university’s West Lafayette, Ind., campus had some 65 data centers two years ago, many of them small. (Purdue defines a data center as any room with additional power and specialized heavy-duty cooling equipment.) The university has closed at least 28 of them in the last 18 months.
The Purdue consolidation is the result of several broad trends: increased virtualization, use of higher-density systems, and increased use of SaaS.
McCartney wants to limit the university’s server management role. “The only things that we are going to retain on campus is research and strategic support,” he said. That means that most, if not all, of the administrative functions may be moved off campus.
This shift to cloud-based providers is roiling the server market, and is expected to help send server revenue down 3.5% this year, according to IDC.
Gens says that one trend among users who buy servers is increasing interest in converged or integrated systems that combine server, storage, networking and software. They now account for about 10% of the market, and are expected to make up 20% by 2020.
Meanwhile, the big cloud providers are heading in the opposite direction, and are increasingly looking for componentized systems they can assemble, Velcro-like, in their data centers. This has given rise to contract manufacturers, or original design manufacturers (ODMs), mostly overseas, that build these systems for cloud providers.
The company’s policies for shutting off sales to retailers and shipping licenses to OEMs (original equipment manufacturers) are posted on its site, which was recently updated to show that Windows 7’s “retail end of sales” date was Oct. 30.
The next deadline, marked as “End of sales for PCs with Windows preinstalled,” will be Oct. 30, 2014, less than a year away.
Microsoft’s practice, first defined in 2010, is to stop selling an older operating system at retail one year after the launch of its successor, and to halt delivery of the previous Windows edition to OEMs two years after a new version launches. The company shipped Windows 8, Windows 7’s replacement, in October 2012.
As recently as late September, the last time Computerworld cited the online resource, Microsoft had not filled in the deadlines for Windows 7. At the time, Computerworld said that the end-of-October dates were the most likely.
A check of Microsoft’s own online store showed that the company has pulled Windows 7 from those virtual shelves.
In practical terms, the end-of-retail-sales date has been an artificial and largely meaningless deadline, as online retailers have continued to sell packaged copies, sometimes for years, by restocking through distributors which squirreled away older editions.
Today, for example, Amazon.com had a plentiful supply of various versions of Windows 7 available to ship, as did technology specialist Newegg.com. The former also listed copies of Windows Vista and even Windows XP for sale through partners.
Microsoft also makes a special exception for retail sales, telling customers that between the first and second end-of-sale deadlines they can purchase Windows 7 from computer makers. “When the retail software product reaches its end of sales date, it can still be purchased through OEMs (the company that made your PC) until it reaches the end of sales date for PCs with Windows preinstalled,” the company’s website stated.
The firmer deadline is the second, the one for offering licenses to OEMs. According to Microsoft, it “will continue to allow OEMs to sell PCs preinstalled with the previous version for up to two years after the launch date of the new version” (emphasis added).
After that date, Microsoft shuts off the spigot, more or less, although OEMs, especially smaller “white box” builders, can and often do stockpile licenses prior to the cut-off.
But officially, the major PC vendors — like Dell, Hewlett-Packard and Lenovo — will discontinue most Windows 7 PC sales in October 2014, making Windows 8 and its follow-ups, including Windows 8.1, the default.
Even then, however, there are ways to circumvent the shut-down. Windows 8 Pro, the more expensive of the two public editions, includes “downgrade” rights that allow PC owners to legally install an older OS. OEMs and system builders can also use downgrade rights to sell a Windows 8- or Windows 8.1-licensed system, but factory-downgrade it to Windows 7 Professional before it ships.
Enterprises with volume license agreements are not at risk of losing access to Windows 7, as they are granted downgrade rights as part of those agreements. In other words, while Microsoft may try to stymie Windows 7 sales, the 2009 operating system will long remain a standard.
As of the end of November, approximately 46.6% of all personal computers ran Windows 7, according to Web measurement vendor Net Applications, a number that represented 51.3% of all the systems running Windows.
IBM is in the throes of developing software that will allow organizations to use multiple cloud storage services interchangeably, reducing dependence on any single cloud vendor and ensuring that data remains available even during service outages.
Although the software, called InterCloud Storage (ICStore), is still in development, IBM is inviting its customers to test it. Over time, the company will fold the software into its enterprise storage portfolio, where it can back up data to the cloud. The current test iteration requires an IBM Storwize storage system to operate.
ICStore was developed in response to customer inquiries, said Thomas Weigold, who leads the IBM storage systems research team in IBM’s Zurich, Switzerland, research facility, where the software was created. Customers are interested in cloud storage services but are worried about entrusting data to third-party providers, in terms of both security and the reliability of the service, he said.
The software provides a single interface that administrators can use to spread data across multiple cloud vendors. Administrators can specify which cloud providers to use through a point-and-click interface. Both file and block storage is supported, though not object storage. The software contains mechanisms for encrypting data so that it remains secure as it crosses the network and resides on the external storage services.
A number of software vendors offer similar cloud storage broker capabilities, all in various stages of completion, notably Red Hat’s DeltaCloud and Hewlett-Packard’s Public Cloud.
ICStore is more “flexible” than other approaches, said Alessandro Sorniotti, an IBM security and cloud systems researcher who also worked on the project. “We give customers the ability to select what goes where, depending on the sensitivity and relevance of data,” he said. Customers can store one copy of their data with one provider and a backup copy with another.
ICStore supports a number of cloud storage providers, including IBM’s SoftLayer, Amazon S3 (Simple Storage Service), Rackspace, Microsoft Windows Azure and private instances of the OpenStack Swift storage service. More storage providers will be added as the software goes into production mode.
“Say you are using SoftLayer and Amazon, and Amazon suffers an outage; then the backup cloud provider kicks in and allows you to retrieve data from SoftLayer,” Sorniotti said.
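None of ICStore’s actual interfaces are public, but the replicate-and-fail-over behaviour Sorniotti describes can be sketched in a few lines of Python. Everything below is hypothetical: the `Backend` and `Broker` classes stand in for real object-store APIs, and the XOR cipher is a placeholder for the authenticated encryption a real broker would use.

```python
import hashlib

class Backend:
    """Toy stand-in for a cloud object store (SoftLayer, S3, Swift, ...)."""
    def __init__(self, name):
        self.name = name
        self.blobs = {}
        self.available = True

    def put(self, key, data):
        if not self.available:
            raise ConnectionError(f"{self.name} is down")
        self.blobs[key] = data

    def get(self, key):
        if not self.available:
            raise ConnectionError(f"{self.name} is down")
        return self.blobs[key]

def xor_cipher(data, secret):
    """Placeholder 'encryption' so the sketch stays dependency-free;
    a real broker would use an authenticated cipher such as AES-GCM."""
    key = hashlib.sha256(secret).digest()
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class Broker:
    """Writes an encrypted copy to every backend; reads from the first
    backend that responds, so a single provider outage is transparent."""
    def __init__(self, backends, secret):
        self.backends = backends
        self.secret = secret

    def put(self, key, data):
        enc = xor_cipher(data, self.secret)
        for backend in self.backends:
            backend.put(key, enc)

    def get(self, key):
        for backend in self.backends:
            try:
                return xor_cipher(backend.get(key), self.secret)
            except ConnectionError:
                continue  # fall through to the backup provider
        raise ConnectionError("all providers unavailable")
```

The point of the sketch is the read path: data is encrypted before it leaves the broker, every provider holds a full copy, and a `get` silently skips any provider that is down.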
ICStore will also allow multiple copies of the software to work together within an enterprise, using a set of IBM patent-pending algorithms developed for data sharing. This ensures that the organization will not run into any upper limits on how much data can be stored.
IBM has about 1,400 patents that relate to cloud computing, according to the company.
The dismal numbers will not be welcomed at Microsoft, which sells the bulk of its Windows licenses to computer makers as they assemble new PCs.
According to IDC’s revised estimate, 2013 PC shipments will total 314 million, a 10.1% decline from last year’s 349 million.
The new forecast was the third reduction in 2013 expectations by IDC, which started the year thinking that the decline would be just 1.3%. With each revision, the research firm’s projections became gloomier, first in May when it predicted a 7.8% contraction, then again in August when its analysts said the decline would intensify to 9.7%.
If IDC’s latest prognostication is accurate, Asian factories will ship about the same number of PCs to distributors, retailers or OEMs as they did in 2009, two years before “peak PC,” when PC shipments reached nearly 364 million before starting a 24-month-and-counting slump.
The downturn will continue through 2014, IDC maintained Monday, when PC shipments will fall another 3.8% to around 302 million — like the 10.1% drop this year, a larger decline than the August estimate — before recovering ever so slightly over the next several years. But for the foreseeable future — at least through 2017 — shipments will hover just north of 300 million, or about the number delivered in 2008.
“Beyond 2017, at this time we don’t have reason to think the market would take off in double-digit year-over-year growth,” said Jay Chou, one of the IDC analysts who works on the PC tracking team, in an email reply to questions.
The last time PC shipments climbed by double digits was in 2010, when year-over-year growth was a robust 13.7%.
Other than computer component suppliers and PC makers, Microsoft will be the company hit the hardest: Sales of its Windows operating system are almost entirely reliant on new PC sales. Last quarter, for example, Microsoft said that OEM-based Windows revenue declined 7% overall, nearly the same as the drop in PC shipments for the quarter measured by IDC.
Sony has promised to have “substantial” resupplies of the PlayStation 4 before the end of the year, but has given no indication as to what qualifies as substantial. Wedbush analyst Michael Pachter has stepped in to fill that information void, telling investors in a note this morning that he believes Sony is making PS4s at the rate of a million systems per month.
Pachter followed up on Sony’s announcement today that it had sold 2.1 million systems worldwide, saying that number fits well with previous estimates that Sony began manufacturing PS4s for retail on September 1, and that it faces a gap of up to three weeks from a system’s creation to the time it arrives on shelves.
“We expect Sony to continue to ship 1 million consoles per month, so as of the end of January, we believe Sony will have manufactured a cumulative 5 million consoles and will have shipped 4.25 – 4.5 million,” Pachter said. “We expect the 55 percent allocation to North America to continue through January, and then revert to a more normalized 40 percent of units once Sony launches in Japan and other countries. We think that Microsoft is on a similar production schedule, with similar allocations to North America.”
Pachter added that specialty retailer GameStop has been receiving roughly half of the systems shipped to North America, and that it will continue to take up that share of the allocations through December. In the New Year, Pachter expects the company’s share to be dialed back to a “more customary” 30 percent.
If the shipment projections are accurate, the PS4 would be more than holding up its part of publishers’ predictions that Sony and Microsoft would combine to ship 10 million units of their new systems by the end of March.
The program, dubbed “Student Advantage,” was unveiled in mid-October, when Microsoft promised that it would debut Dec. 1.
Educational institutions, whether K-12 school districts or those in higher education, that license Office Professional Plus 2013 or Office 365 ProPlus — the former is traditionally licensed software while the latter is a subscription — can now also hand Office 365 ProPlus subscriptions to students, free of charge.
Schools and universities must have licensed Office for staff and faculty institution-wide, according to Microsoft, to be eligible for the student give-away. When students graduate, their Office 365 subscription expires.
Office 365 ProPlus includes rights to download and install copies of the newest Office desktop applications on up to five Windows PCs or Macs owned by the student, as well as rights to run the iPhone or Android editions of Office Mobile.
Students, faculty and staff at universities that do not equip employees with Office can instead pay a flat $80 for a four-year subscription to Office 365 University. That subscription program allows Office 2013 to be installed on up to two PCs or Macs, and Office Mobile on as many as two mobile devices.
With the release of Grand Theft Auto Online, Rockstar has taken its blockbuster franchise in an ambitious new direction. The multiplayer world, complete with in-game economy, certainly has many of the hallmarks of a Free-2-Play title, but could GTA Online actually make it as a standalone F2P game?
Given the seismic shift the games industry has already made towards F2P, no one would be surprised if Rockstar made this next step. However, there is a lot at stake, and creating a successful F2P title isn’t simply a case of throwing in some in-app purchases and giving a £40 game away for free.
F2P is already established as the dominant business model for mobile and PC games, thanks in part to the prevalence of micro-transactions and to platforms that make it relatively easy for publishers and developers to integrate analytics and use that data to make informed, real-time game design changes that keep players engaged and improve retention. The transition onto console has been a slower burn: designing successful F2P games requires an understanding and skill set that isn’t necessarily native to publishers with a long heritage of designing games to ship in a box.
As a result, many F2P console games have come up short, offering a poor tutorial and onboarding process, plus a monetisation structure that is much closer to a used-car salesman than an enjoyable experience that puts control in the users’ hands. However, the data capabilities of the Xbox One and PS4 mean that F2P on console finally looks set to take off, with an impressive list of F2P titles already set for release, including Little Big Planet, Planetside 2 and War Thunder.
To better understand the potential of console transition we thought we’d take a theoretical look at GTA Online as a standalone F2P title.
Our in-house design team applied GamesAnalytics’ proprietary evidence-based research methodology to benchmark key aspects of its game design against best-practice F2P game design from over 80 titles.
Focusing on six main categories, including Monetisation, Retention, Engagement and Virality, and analysing 50 key criteria, the team found, unsurprisingly, that GTA Online surpassed the best-in-genre score for Retention, Game Mechanics, Engagement and Game Overview, clearly reflecting the high quality of the game. However, if GTA Online were going F2P, it would need to look at mechanics around Monetisation and Virality.
Based on these data findings, here are five recommendations to improve the F2P potential of GTA Online:
1. Improve the currency structure
Currently GTA Online has a single currency. This is fine when the game is not relying on that currency as part of its monetisation, but a true F2P game would want to extend this for greater flexibility. Adding a premium currency is generally the way to give a game more flexibility in delivering the F2P mechanic. Making the currency part of the world, so it feels natural, is vital to ensuring the monetisation doesn’t jar with the surrounding game.
There are a number of ways that people are encouraged to spend money both in the real and the virtual world. Especially for a game like GTA, it is vital that it feels natural and intuitive. Discounts and bundles are obvious incentives for getting people to invest in in-game economies, but rental and test drives are also a good way of letting players get a taste for the high life and incentivising them to keep grinding or splash the cash.
These ‘try before you buy’ mechanics are good ways of easing players onto the paying path while keeping the barrier low and the incentive high.
Giving players the ability to buy luxury vanity items using a premium currency is exactly the way you would expect Rockstar to monetise its players. The game has always been about getting rich quick and showing off the proceeds of your crimes. This is not about honest hard slog, so it’s fitting that players should be given a quick route to the high life through whatever means at their disposal. A successfully free-to-play GTA Online should also include consumables: things that the player will spend money on that give them a short term advantage or simply let them show off.
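As a rough illustration of the dual-currency split described above, here is a minimal Python sketch. The class, the exchange rate and the prices are all invented for the example; Rockstar’s actual economy design is, of course, not public.

```python
class Wallet:
    """Hypothetical dual-currency wallet: 'cash' is the grind currency
    earned in-game, 'gold' is the premium currency bought with real money."""
    def __init__(self, cash=0, gold=0):
        self.cash = cash
        self.gold = gold

    def earn_cash(self, amount):
        """Mission payouts and other in-game earnings."""
        self.cash += amount

    def buy_gold(self, real_money_pennies, rate=10):
        """Premium top-up; the 10-gold-per-penny rate is illustrative."""
        self.gold += real_money_pennies * rate

    def pay(self, price_cash=0, price_gold=0):
        """An item can carry both a grind price and a premium shortcut."""
        if price_gold and self.gold >= price_gold:
            self.gold -= price_gold   # the quick route to the high life
            return True
        if self.cash >= price_cash:
            self.cash -= price_cash   # the honest hard slog
            return True
        return False
```

The design point is in `pay`: every purchasable item keeps a grind price, so non-payers are never locked out, while the premium currency simply shortcuts the time investment.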
2. Introduce a VIP structure to fast track progress and reward members
There is no game that is more about being king of the hill than GTA, so a full VIP structure is essential. Imagine the retention value of being the only player that can drive around the hills of Los Santos in a purple Ferrari with gold trim.
VIP membership could offer:
Rank Point/Job Point boosts
Monthly $/Gold allowance
Access to premium clothes, vehicle paint jobs and vanity items
Special members store accessible through the iFruit with daily/weekly member offers
3. Utilise no-lose gambling
We’ve already touched on the repetition that exists within GTA Online: completing mission after mission to build up your cash and accessory stockpiles. One alternative to a life of hard graft and long hours is gambling, an easy-to-implement F2P mechanic which fits with Rockstar’s vision and GTA’s ‘feel’. Mechanics such as magic boxes offer players a no-lose gamble: spending some money guarantees something cool. There can be no better way of taking the easy route than making sure the odds are stacked in your favour.
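A magic box is easy to prototype: a weighted draw over a loot table in which every entry is a prize, so the player always wins something. The table contents and weights below are invented for illustration.

```python
import random

# Hypothetical loot table: every entry is worth something, so the
# player never "loses" -- only the size of the win varies.
MAGIC_BOX = [
    ("common paint job",  0.60),
    ("rare vehicle",      0.30),
    ("gold-trim Ferrari", 0.10),
]

def open_magic_box(rng=random):
    """No-lose gamble: a weighted random draw that always returns a prize."""
    items = [item for item, _ in MAGIC_BOX]
    weights = [weight for _, weight in MAGIC_BOX]
    return rng.choices(items, weights=weights, k=1)[0]
```

Because the weights only shape the distribution and never include an empty outcome, every purchase pays off, which is what makes the mechanic feel generous rather than predatory.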
4. Introduce a trading mechanism to help increase community aspects
If gambling isn’t your thing then a bit of business on the side can help you make it to the top. Trading in F2P games inevitably encourages a black market, but unlike other F2P games where there is a clear split between grind currency and premium currency, GTA Online F2P should allow this secondary market to exist.
Letting players trade whatever they want will encourage a free-form economy that will favour the adventurous, the ruthless and the downright corrupt. The mechanic will drive the economy and build player loyalty.
Players will buy and sell from each other, and with rare items it is also possible to use data analytics to monitor price elasticity as players bid. Items can trade for 100x their original value in F2P games, and that data can be useful for defining pricing as well as for delivering value and incentivising players.
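The elasticity monitoring mentioned above boils down to comparing what items actually trade for against their base price. A minimal sketch, with invented item names and prices:

```python
from statistics import median

# Hypothetical base (store) prices for tradeable items.
BASE_PRICES = {"purple Ferrari": 1000, "gold trim": 200}

def price_multiplier(trades, item):
    """Return how far above its base price an item actually trades,
    using the median of observed player-to-player sale prices.
    'trades' is a list of (item_name, price_paid) pairs from analytics."""
    paid = [price for name, price in trades if name == item]
    if not paid:
        return None  # no market data yet for this item
    return median(paid) / BASE_PRICES[item]
```

A multiplier well above 1 flags an item the player base values far more than its sticker price, exactly the signal a designer would use to reprice or to seed rare drops.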
5. Build in reward mechanics for better social sharing
GTA is such a well-known franchise that it pretty much sells itself. However, rewarding players for inviting other players to join is a well-proven mechanism and can help to double your player base for little or no cost.
Giving players an incentive to invite is key; there would be nothing better than being able to pimp your friends by taking a cut of the money they spend, as a due reward for getting them into the game in the first place.
With the PlayStation 4 and Xbox One on the scene, the next console generation has finally begun. While a new generation usually brings the promise of more graphical power, great graphics are only part of the gaming equation. What will these new consoles allow developers to do creatively?
In its last two titles, Dear Esther and Amnesia: A Machine for Pigs, independent developer The Chinese Room focused on pushing the first-person game away from the shooting mechanics that usually dominate. The studio’s next title, Everybody’s Gone to the Rapture, is coming to PlayStation 4 with some help from Sony Computer Entertainment. For The Chinese Room, next-gen helps their creative juices just by being easier to work with.
“I think the major thing, from the perspective of actually building games, is less for us about the power – that’s brilliant of course, and having significantly higher budgets makes a big difference – but it’s more about the ease of working with PS4,” The Chinese Room creative director Dan Pinchbeck told GamesIndustry International. “So far, it’s just been a dream bit of kit to work with. We’ve got the advantage of working with CryEngine, another great piece of tech of course, but even then it’s been remarkably smooth to get things up and running quickly. That’s worth its weight in gold from a production standpoint, and the blunt reality is that easier production equals more creative freedom and opportunity.”
According to Braid creator Jonathan Blow, aiming for a single, next-generation set of specifications allowed the team behind The Witness to settle on a single visual style for the game. That title is also heading to PlayStation 4 in 2014.
“Creatively, we build and we assume that we have enough power in rendering,” explained Blow. “When we were planning the look of the island, we had a couple of choices. Do we target the PlayStation/Xbox 360 class of machines or do we move to next-generation consoles? Because development was going long, we decided we were going to be in the next console cycle anyways.”
“If we’d ended up on lower-spec machines, it wouldn’t just be that [The Witness] would have lower-poly models. It would’ve affected the style all over the place; the style of the game would’ve been different. I don’t think it would’ve been as nice.”
For Ghost Games, the new shepherd of EA’s Need for Speed franchise, next-gen does come down to “more power”. This power – and the new set of expectations that come with it – frees the team to think outside of the box when it comes to gameplay innovation. A new generation allows developers to think about what’s possible instead of wringing more blood from a worn-out stone.
“It makes us think differently,” said Ghost Games executive producer Marcus Nilsson. “Every time there is a transition we start thinking about what would be possible. We are not locked into old boundaries anymore. From that we get great innovations like AllDrive. The systems are giving us power to do more, more AI, more particles etc. Just turning everything up really.”
Nilsson also noted that the PlayStation 4 and Xbox One provide other options, including social networking features and second-screen modes, which “opens up creative solutions around cross-platform play.”
One of the highlights of Sony’s launch window slate for the PlayStation 4 is Infamous: Second Son from Sucker Punch. While the game simply looks amazing, improved graphics and horsepower also mean the human element of Infamous can be pushed forward.
“[Infamous: Second Son] is all performance captured,” Sucker Punch co-founder and director of development Chris Zimmerman told us. “We actually use all kinds of cameras, with dots on the actors’ faces getting mapped through 3D scans. As you see people in the game, you’ll see their faces move in realistic ways.”
“See the wrinkles appear?” Zimmerman pointed out in a demo of Second Son. “We are actually animating 15,000 vertexes in his face 30 times a second to get that to happen that well. The thing that really matters for a game like this is you can actually see the characters act. You can read his face. You have a million years of human evolution that’s trained you to read people’s expressions and their faces; now we can bring that to you. That is the expression that these actors had when they did the scene. If we show you the video of their faces and then show you the in-game feature, you’ll be like ‘that’s the expression that guy had on.’ It seems dumb, but it matters.”
In some cases, though, the PlayStation 4 and Xbox One will just allow what previous generations have allowed: more, better-looking things onscreen in our games. And even that can improve the player’s experience. For BioWare Edmonton and Montreal general manager Aaryn Flynn, next-gen means a more immersive and interactive game world for BioWare fans.
“With the next generation of consoles, the most important question we ask ourselves is ‘How does this help our storytelling?’ As we’ve worked with them, we think it starts with a density and dynamism that wasn’t possible previously,” said Flynn. “‘Density’ in the sense of more interesting things on the screen that help immerse you in the game world, and ‘dynamism’ in that they are more interactive than ever before.”
The generation has only just begun. Developers still have plenty of time to learn how to make the PlayStation 4 and Xbox One dance and sing. What’s been shown so far is pretty damn good, so let’s sit back and enjoy the future.
Take-Two Interactive Software has repurchased all of the Icahn Group’s stock, a deal worth $203.5 million and involving 12.02 million shares.
“This share repurchase reflects our confidence in the Company’s outlook for record results in fiscal 2014 and continued Non-GAAP profitability every year for the foreseeable future,” said Take-Two CEO Strauss Zelnick.
“With our ample cash and strong expected cash flow, we are able to pursue a variety of investment opportunities, including repurchasing our Company’s stock. On behalf of our board and management team, I would like to thank Brett, James and Sung for their support, dedication and service to our organisation. They leave Take-Two better positioned than ever for continued success.”
The move was funded by cash and cash equivalents on hand and Take-Two explained the move is “part of an ongoing strategy to buy back its shares.”
Take-Two and Icahn gave no reason for the sale of the shares, but as previously agreed, Icahn’s Brett Icahn, Jim Nelson, and SungHwan Cho have resigned from the Take-Two board.
The Icahn Group is overseen by activist investor Carl Icahn, whom Forbes this year named one of its 40 highest-earning hedge fund managers. In the past he has tried to acquire Dell and Marvel Comics, and he owns a ten percent stake in Netflix.
[UPDATE]: Investors did not greet the news warmly, as Take-Two shares traded at twice their average volume and ended the trading day down 5.49 percent to $16.
The phone is a variant, though not an outright successor, of the Lumia 520, and helps Nokia offer Windows Phone at a more accessible price to a larger number of users, a spokeswoman said via email.
The smartphone will go on sale before the end of the year in China, Vietnam, Hong Kong, Cambodia, Singapore and Russia. In China, it is priced at 1,099 yuan ($180) before taxes and subsidies. It will then go on sale in Australia, New Zealand, Ukraine, Kazakhstan and parts of Africa during the first quarter of next year, according to Nokia.
During the third quarter, Lumia sales increased by 19 percent quarter-on-quarter to 8.8 million units, reflecting strong demand particularly for the Lumia 520, Nokia said. The Lumia 525, and the expanded distribution it brings, is therefore important to Nokia.
Other than 1GB of RAM rather than 512MB, the specs of the Lumia 525 are identical to those of the Lumia 520. That includes a 4-inch screen with a resolution of 800 by 480 pixels, a 5-megapixel camera and a dual-core 1GHz processor. There is also 8GB of integrated storage and a microSD card slot.
The market for sub-$200 smartphones is at a crossroads, mostly thanks to Google’s efforts. The recently announced Moto G from Google-owned Motorola Mobility costs as much as the Lumia 525, but is powered by a 1.2GHz quad-core processor and has a 4.5-inch 720p screen.
Even though the Lumia 520 has helped increase the popularity of Windows Phone, Nokia and Microsoft can’t afford to rest. Their main priority should now be to bring down the cost of Windows Phones to below $100 without a contract, said Pete Cunningham, principal analyst at Canalys.
Nokia shareholders last week voted to approve Microsoft’s acquisition of “substantially all” of the company’s Devices & Services business. The deal is expected to close during the first quarter of next year.
It is not known which applications Apple aims to use the PrimeSense technology for, or what it paid for the Tel Aviv, Israel-based company. Apple spokeswoman Kristin Huguet emailed the company’s standard statement after an acquisition.
“Apple buys smaller technology companies from time to time, and we generally do not discuss our purpose or plans,” she wrote.
PrimeSense technology was used to power the original Kinect motion-sensing input device for Microsoft’s Xbox 360.
The Calcalist newspaper in Israel reported on the deal about a week ago, and said Apple would pay $345 million for the company.
PrimeSense said earlier this month that its sensor was used by 3-D printing company 3D Systems for its new 3-D scanner called Sense.
The company’s sensors have applications in other areas, ranging from retail to healthcare, suggesting that Apple has a number of options for deploying the technology in its own devices. Its Capri sensor is a small device designed specifically for integration with mobile phones, TVs, tablets and PCs.
PrimeSense was founded in 2005 and has operated as a fabless semiconductor company. Its technology already powers over 24 million devices around the world, enabling natural interaction between people and devices and between devices and their surroundings, the company said on its website.
AMD has announced that its proprietary Mantle graphics API is attracting more interest as some big names sign up. Rebellion Entertainment has entered the game with its Asura engine, officially adopting Mantle for its upcoming Sniper Elite 3.
It looks like the first title supported by Mantle will be Sniper Elite 3. So far no one is saying what advantage Asura will gain from running on Mantle, but it seems likely to bring boosts in performance as well as enhanced graphics quality. Chris Kingsley, chief technology officer and co-founder of Rebellion Entertainment, said in a press release that his studio was pushing technology as far as it could.
“We are excited about the possibilities that Mantle brings to PC gaming and the industry as a whole. We believe that supporting Mantle will enable us to stay on the bleeding edge of PC gaming and ensure that we don’t leave any performance on the table when it comes to offering gamers amazing experiences,” he said.
Mantle is an application programming interface for Windows designed specifically for graphics processing units based on the Graphics Core Next (GCN) architecture, presenting a deeper level of hardware optimisation. Mantle is supposed to bypass bottlenecks in modern PC API architectures and, AMD claims, enables nine times more draw calls per second than DirectX and OpenGL thanks to lower CPU overhead.
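AMD’s nine-times figure is a claim about CPU overhead rather than GPU power: draw-call throughput is capped by how much CPU time the API and driver burn per call. A minimal sketch of that relationship, with made-up per-call costs chosen purely to illustrate the arithmetic (not measured DirectX or Mantle figures):

```python
# Toy model: if one CPU core does nothing but issue draw calls, throughput
# is the inverse of the per-call CPU cost. The microsecond costs below are
# hypothetical, chosen only to illustrate a nine-times gap.
def max_draw_calls_per_second(overhead_us_per_call: float) -> float:
    return 1_000_000 / overhead_us_per_call

thick_api = max_draw_calls_per_second(9.0)  # e.g. a heavyweight driver path
thin_api = max_draw_calls_per_second(1.0)   # e.g. a low-overhead path

print(f"thick API: {thick_api:,.0f} draw calls/s")
print(f"thin API:  {thin_api:,.0f} draw calls/s")
print(f"ratio: {thin_api / thick_api:.0f}x")
```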
In less than a week, both the PlayStation 4 and the Xbox One will have launched in the world’s most lucrative console markets. If you had to plant a flag to mark the start of a new generation, you’d struggle to find a more appropriate spot.
Well, praise be. Microsoft was justifiably lambasted for its early direction and messaging, but the ill-feeling created by that string of fumbled choices was untroubled by all subsequent attempts to retrench and appease. Since then, Sony has walked a blessed path; not exactly free of mistakes and questionable decisions, but bolstered by the knowledge that the scrutiny of both the press and the forum-dwelling public was focused elsewhere. Perhaps now hard numbers can replace the speculation and supposition. Perhaps now we will be able to see the true measure of the policy reversals and resolution deficiencies.
There is, after all, a bigger picture to consider. It can be fun to get lost in the manufactured rivalry of a console war, but both Sony and Microsoft understand that this generation must be about more than the chips in their little – and not so little – black boxes. Gaming has never been more popular, or more culturally prevalent, but a lot has changed since the console companies last played this billion-dollar crapshoot.
So much of the industry’s recent growth has happened away from the traditional world of AAA blockbusters, where audience gains have been handily outmatched by soaring expenses. The early debate may be dominated by familiar concerns over framerates and dots-per-inch, but the terms of this generation will be different from the last. Sony’s mistakes with the PlayStation 3’s esoteric architecture didn’t go unnoticed by either party, and it shows in the hardware.
“The last generation created a bunch of artificial work. You had to do things in a very different way and, in the end, it wasn’t like you got a massive amount of technical performance out of it. It was time that didn’t go into making the games better,” says Nick Button-Brown, general manager at Crytek.
“I like the fact that, this time, it’s all built on architecture that we can understand. If you look at the PS3, people only started to get the most out of the system at the end of the cycle, but that’s five or six years on. That’s terrible. I want to start getting the most out of it nearer the start. That’s the advantage with simpler and more similar architecture – we’ll be seeing much more from the first games out.”
Crytek is the studio responsible for Ryse: Son of Rome, a standard-bearer for the Xbox One. Button-Brown admits that, while setting a visual benchmark was not the main objective of the project, it was a side-mission of sorts, and the pride with which he describes Crytek’s work indicates that he considers the mission very much accomplished. The smoke, the fire, the beads of sweat running down the lined, wrinkled faces of the characters, the way those characters plant their feet; these are, he boldly claims, new heights for console gaming.
“I do think we’re going to set a visual benchmark; it’s going to be very difficult for anyone to beat our visual performance. We put a lot of work into facial, a lot of work into animation, just making it all feel much more real,” he says. “Is there further we can go? Definitely. We have some high-end cinema tools that don’t run in real-time even on high-end PCs now – we’re talking one, two frames per second. Eventually, we’ll be able to run those in real-time.”
In the absence of stiff competition, Ryse has as strong a claim to the pinnacle of visual excellence as any other launch title, but Button-Brown understands that such victories are short-lived. After all, in blockbuster development, a better looking game is always just over the next hump of the release schedule. Crytek will no doubt persist in that direction, but the impact of this generation’s visual performance will not be as profound as the jump to HD, and the differences between the PlayStation 4 and Xbox One hardware will matter less still. This time, exactly what constitutes the “cutting-edge” will be harder to pin down.
“There’s always more we can do [visually], but I do think you reach a point where, for the user, they feel that it looks as good as it’s going to get, and they’re not going to see a huge difference between [the consoles],” he says. “For us, the leap is about the details. It’s not about one or two big things. It’s about being able to do small things much better: more stuff on-screen, more AI, more physics.”
It would be churlish to ignore the fact that Ryse has failed to stir the imaginations of the critics, eliciting unanimous praise for its visual detail and precious little else. My interview with Button-Brown was conducted prior to the publication of those reviews, but even then he was cognisant of the gamble that creating a launch title for this particular generation represented. In the past, there were obvious, powerful hooks for developers to work with – the advent of 3D graphics and HD graphics, the availability of a hard-drive, online play as a usable tool – but this generation is more diffuse.
“Going into launch, I don’t know whether we’ve spent the resources in the right place. I don’t know whether we’ve focused our efforts in the right place. I’m only going to know that when people get to buy it,” he says.
“We talk to publishers a lot, and one of the most painful questions is, ‘Tell me what next gen gameplay is gonna be?’ It’s not something you can define. Nobody delivers gameplay because it’s next gen; you’re delivering gameplay because it’s good. That’s one of the things we struggled with [in Ryse's E3 demo]. We showed a cut-down version of the gameplay and we were criticised for that. We didn’t see that coming. We were too close, and we cut it down further than people wanted to see.”
However, while the criticisms leveled at Ryse may well be justified, a part of the problem may be that, at the dawn of a new generation, nobody is quite sure what they want to see. They only know what has gone before, and will resist any attempt to smuggle what are regarded as the bad habits of the past into the $400 future. Ryse signalled its intent with combat that closely resembled a QTE. That was never likely to go down well with the press, who instantly suspected Crytek of trying to coast on graphics alone.
“The generational leap is not as clear cut now,” Button-Brown admits. “Maybe in a year’s time we’ll have a better understanding of what the leap really is this time, as people start playing things and we start to see what really matters. I think with hindsight we’ll be able to look back and see, ‘yeah, that was the big step.’”
Perhaps it’s naive to expect more clarity on what might define this generation from developers working so closely with the hardware, but in any case, that would be no slight against Crytek. Apart from Kinect 2.0 on the Xbox One – which may finally have the hardware to honour some of the promises made four years ago – in terms of new game experiences there isn’t an obvious wellspring for original ideas on either console. Indeed, the most obvious differences in the early days of the generation are likely to be found in the service layer: social integration, voice control, multimedia functions, and other areas often dismissed as secondary to the tasks for which a console should be designed.
This is one of the key ideas I took away from my conversation with Michiel van de Leeuw, technical director at Guerrilla Games. Essentially, the moment-to-moment experience of established genres will remain the same, but innovation will arise from, “a deeper, underlying layer.”
“It’s not like we have that one gizmo to make everything really good or different, but the way that the operating system and the games work together, it’s much more of a marriage of those two things,” says van de Leeuw. “It’s a much more holistic approach to the console. How do people use it? How do people want to use it? How do we make sure that every hour of using your console is an hour spent having fun? And almost nothing is more fun than sharing experiences with other people. It’s all integrated, and under the hood there’s a lot of complexity to make sure that you don’t notice it. A lot of magic is necessary to make it look simple.”
As a subsidiary of Sony Computer Entertainment and the developer of a key launch title, Guerrilla Games was part of the inner circle that formed around Mark Cerny during the PlayStation 4’s creation. The most taxing problem, the subject of the most meetings and debates, was how to improve the experience around and outside of the games – streaming, background downloads, switching between applications, and so on. For Cerny, “immediacy” was a watchword.
When it came to the fundamental hardware architecture, however, van de Leeuw says that the directive was relatively simple: “give us more…as many graphical gizmos as you can afford.” The extra power was a given rather than the main focus.
“I like to ask people about what the next generation should be about, and everyone says, ‘it has to be photo-realistic, and everything has to be more. There has to be thousands of people and blah, blah, blah.’ But why is that fun? If you have 1000 people around you, do you feel more attached to them than if you just had one or two? Technology does not immediately result in a more satisfying experience. The first layer that people think about is better graphics, more of everything. And then they think, ‘What do I need more of? I don’t know, really, but there must be more of something‘.”
There it is again: the great, unknowable ‘something’ that, nevertheless, everyone is waiting impatiently to see. Killzone: Shadow Fall has fared better with the critics than Ryse, but the expectation of clear, identifiable progress is used as ammunition in the majority of its negative reviews. For van de Leeuw – who also spoke to me prior to the publication of his game’s review scores – launch titles are not necessarily supposed to alter the way people look at games as a whole, but he also makes no secret of the increasing complexity of productions on the scale of Killzone. More power can make life easier in some respects, but certainly not all.
“You have to focus on 1000 things at the same time, and at the same time as that you need to grow your company, because you need more people to focus on all of those things. That, by itself, becomes a problem, because it becomes difficult to manage the complexity brought by all of those extra people. It’s very challenging.
“We’re working with first-person shooters, and look at how incredibly complex these things are. You’re not just selling one game: you’re selling a movie, and a game, and a multiplayer experience that needs to fit with eSports, and it’s all packaged together. And it all has to be good, because the competition is incredibly, and increasingly, good.”
Indeed, it is the progress evident in individual games, rather than the super-charged hardware, that truly throws down a gauntlet at the feet of the industry’s developers. Umpteen gigabytes of GDDR5 memory is not nearly as powerful a motivator to do better work as the release of, say, The Last of Us or The Walking Dead. New hardware may give developers more options, but the real skill lies in making the right decisions. When there is enough of an installed-base to offer a safety net, van de Leeuw says, the industry’s most talented developers will start taking creative risks, and new genres will emerge.
But will that innovation be exclusive to a specific platform? When a consumer makes their decision to buy either a PlayStation 4 or an Xbox One, is the potential for new ideas a relevant factor? From the developer side, van de Leeuw says, the differences in the hardware of this generation may not offer the sort of rewards that Naughty Dog and Guerrilla wrung out of the PlayStation 3’s distinctive Cell processor. Today, with teams spiralling into the hundreds, budgets on the rise and a dozen other platforms to consider, the emphasis is on efficient tools and flexible engines. Microsoft and Sony made a conscious choice to be more similar than different in terms of architecture, with developers’ needs firmly in mind.
“Being able to squeeze more out of the console by really focusing on it allowed us, in the past, to create experiences that couldn’t be done, or would be much harder to do if we had to split our focus. But I think we’re coming to the day where the amount of effort you have to put in to do that, it’s questionable whether it’s worth it.
“Our games are getting so big. We try to make our experiences richer for gamers, but at some point… there are pros and cons. Sometimes we wished that things were easier. The [PlayStation 3] was difficult to program for, but I still sometimes miss it because it was also very powerful. You could do a lot of stuff that’s still very difficult to replicate, but the time for bespoke architectures is slowly going away.
“If you look back, raw assembly and raw power were what enabled new experiences. Nowadays, experiences are defined or limited by how efficient our toolsets are, how smooth our workflow is, how quickly we can develop, and how much time we have to spend on mundane distractions… Bespoke architecture allows you to do cool and crazy stuff, and from a technical point-of-view I’m still in love with that sort of thing, but I have a 230-person studio that wants to make a killer title.”
Despite what many executives have claimed in calls to their investors, both van de Leeuw and Button-Brown either strongly imply or directly confirm that the cost of making those “killer titles” will rise this generation – not to the same degree as it did with the Xbox 360 and PS3, perhaps, but certainly beyond the already precarious conditions that exist today. While we pore over screenshot comparisons, declaring winners and losers over slight differences in observable visual performance, it’s worth considering what any third-party would actually stand to gain from making one version of a game significantly better than another. Indeed, at companies like Epic, EA and Crytek, the emphasis has been on creating cost-saving tools that work seamlessly across all platforms, effectively glossing over aspects of the hardware that could lead to substantial gains in performance. First-party developers will still pursue that, of course, but, according to Button-Brown, for everyone else the base-level of AAA acceptability now sits at a daunting height on both platforms.
“If anything is just okay, it’s now terrible. ‘Solid’ is a failure. You now have to be so good,” he says. “The teams are getting larger and the risks are getting higher. We’re trying to do a lot of procedural stuff in this next generation to keep costs under control. It’s one of the ways we’re trying to keep that down, but it’s still a cost increase. Each asset needs to be so much better, so much more defined, than it was in the previous generation. No amount of procedural is going to change the fact that your underlying asset just has to be that much better.”
All of that hard-scrabble at the top end of the industry – essentially, fewer companies using more resources to create and market a smaller number of increasingly large games – will have a clear upside for independent developers. Indeed, right now, the beneficial ramifications of Sony’s decision to court indies as early as possible are arguably the most significant difference between the PlayStation 4 and the Xbox One. It always felt like a smart move, and that feeling will be further justified as the paucity of $60 blockbuster releases becomes more apparent.
Microsoft’s early digital strategies and the Xbox One’s evidently underpowered hardware may have monopolised the headlines, but Oddworld Inhabitants’ Lorne Lanning believes that it’s Microsoft’s belated effort to secure the diverse, free flow of content from the indie sector that has truly given Sony the advantage. That reluctance to open up the Xbox platform, he argues, is tied to a big-business mentality that no longer works in a connected entertainment medium – the very same mentality that led to the unanimously derided online check-ins and multimedia focus that dominated the Xbox One’s early messaging.
“ID@Xbox was a bittersweet victory,” Lanning says. “If you have your ear to the ground today, you could see that those policies were going to blow up in its face, particularly when you see what [Sony] was doing. That was an old way of thinking, a way of thinking that was all about control. It’s a trickle down from being a monopoly. There’s a reason there was a class-action suit [against Microsoft]. There’s a reason there was an SEC, antitrust thing. There’s a very good reason for that. They wanted to control everything. The people who made those policies were still thinking very much in that way, and it blew up in their faces.”
For Lanning, this will be a generation defined by consumers getting what they want, rather than what they’re given. The generation where consumers wrest control of gaming back from the companies that have controlled it for so long – platform holders, publishers, retailers – and seek satisfaction from the most agile creative forces. There may be some lingering resistance from those with vested interests in established models, but Lanning believes any company seeking to stand in the way of this intractable change is unlikely to emerge with much credit. There will be more products offering a wider variety of experiences than on any previous generation, with price-points to suit every wallet. The lines of communication are wide open. There is nowhere left to hide.
“As people are becoming more informed and more connected, the shenanigans are becoming more transparent. And with that, what we’ll get is more diversity,” Lanning says. “The industry made up of five publishers really isn’t that long ago, and now what’s going on? How many self-publishing indies are there that can get a 1.5x return on each game and keep building? Maybe they can’t grow and be 500 people by the next year, but they can add 5 more by the next year.”
I mention the prevailing fear that the marketplaces on the Xbox One and PlayStation 4 will become too crowded – that by making consoles a more accessible place for independent developers, they will lose the focus that created huge successes like Castle Crashers, Super Meat Boy and Braid. For Lanning, it’s a worthwhile trade, and one of the most important ways that indies need to “grow up” to take advantage of the incredible opportunity this generation represents. The Battlefields and the Assassin’s Creeds will continue to exist and thrive, but the average consumer knows that already. What they don’t know about are games like Octodad, Below and Everybody’s Gone to the Rapture, and more fool the studio who leaves it up to Microsoft or Sony to raise their profile.
“If we sell a game now for $10, we get $7 on digital networks. Once upon a time, we weren’t even getting $7 on a $60 game,” Lanning says. “It’s a whole different thing, but you have to bring your own visibility. That’s your responsibility. Beyond just designing the game, we have to design how to build the relationship with our audience. People know that they want the GTA and the Call of Duty, and they’re gonna be on both systems. But they also want the surprises, and they want to experiment with those surprises at below the $60 price range. The audience always wants more choice.
“The biggest earners are gonna be the big AAA titles, because they have the $100 million marketing campaigns. You can’t compete with that. But in the years to come, the big properties at E3, the $100 million properties, they will have started off in the indie space. They’re gonna innovate cheaper, faster and more with their audience right away. That’s a guarantee.”