Games publisher EA believes things will turn around for the company next year. This year has been pretty unpleasant after its much-criticised always-online DRM sank its flagship SimCity release.
But Electronic Arts seems to think that is all behind it and has forecast fiscal 2014 earnings above Wall Street’s expectations. EA has been cutting staff and reorganizing studios in recent months to embrace new game platforms. It is preparing a new batch of games including the latest installment of its “Battlefield” shooter game franchise.
Digital revenue, from mobile games, online offerings and other newer sales channels, rose 45 percent year-over-year to $618 million in the fourth quarter ended 31 March, outstripping EA’s packaged goods business. EA thinks that consumers have held back from buying hardware and software as they await new versions of Sony’s PlayStation and Microsoft’s Xbox, expected later this year.
The video game maker forecast revenue of $4 billion, in line with Wall Street’s expectations. Weakness in the packaged games market dented revenue, but EA recognized $120 million of deferred payments from its “Battlefield Premium” service in the fourth quarter.
For the latest quarter, total revenue declined to $1.2 billion from $1.37 billion a year ago. Adjusted revenue rose 6.4 percent to $1.04 billion over the same period, barely beating analysts’ average estimate of $1.03 billion.
Net income fell to $323 million from $400 million last year.
It appears that the Ouya is going to be a bit delayed.
This is good news though, as it is being delayed because the console developers have more cash to spend on it, $15m more to be precise.
Ouya had already raised more than $8m on Kickstarter, and now, just as it should be taking its last steps towards completion, venture capitalists have injected almost twice that amount into it.
We were expecting the console in early June, but that has slid back to 25 June. The time and money will in part be used to solve an issue with sticky buttons, something that usually only happens once consumers have taken some hardware home with them.
The money comes from venture capital firms and other companies including Kleiner Perkins Caufield & Byers (KPCB), Nvidia, Shasta Ventures, and Occam Partners. KPCB’s general partner Bing Gordon will join the Ouya board of directors as a result.
“We want Ouya to be here for a long time to come,” said Julie Uhrman, Ouya founder and CEO.
“The message is clear: people want Ouya. We first heard this from Kickstarter backers who provided more than $8 million to help us build Ouya, then from over 12,000 developers who have registered to make an Ouya game, next from retailers who are carrying Ouya online and soon on store shelves, and now from top pioneering investors.”
Gordon is in charge of digital investments at KPCB and is a veteran of the games industry, having started at Electronic Arts in 1982.
“Ouya’s open source platform creates a new world of opportunity for established and emerging independent game creators and gamers alike,” he said.
“There are some types of games that can only be experienced on a TV, and Ouya is squarely focused on bringing back the living room gaming experience. Ouya will allow game developers to unleash their most creative ideas and satisfy gamers craving a new kind of experience.”
Ouya consoles should start arriving in living rooms on 25 June. If you want one, you are going to have to come up with around $100, plus another $50 if you want two controllers.
As anyone who has accidentally walked into a room full of children can tell you, they’re good at asking the kinds of questions that just keep drilling down. “Why is the sky blue? So why does blue light get scattered more? Then why is the sky red at sunset? Where are you going?”
And although I don’t recommend it, if you were to sit one of these little buggers down with a quarterly earnings report from EA or Activision, they might soon start asking “Why are violent video games so much more popular than other games?” It’s a tricky question to answer without falling down the why hole. Because shooting stuff is fun. Why is it fun? Because people like military themes where they can be the hero. Okay, but why is that? Because players like feeling ridiculously powerful and enormous guns let them do that. But why is that appealing? Why, why, why?
Well, some psychologists are trying to tease apart the reasons why violence sells without throwing their hands up and shouting “Just because! And I’m not even your real dad!” Researchers Scott Rigby and Richard Ryan describe how they think that the design of violent games – especially shooters – naturally does a pretty good job of satisfying some very basic psychological needs. But not in the way you may be thinking.
In their book, Glued to Games: How Video Games Draw Us In and Hold Us Spellbound, Rigby and Ryan describe “self-determination theory,” a fairly well established framework that aims to describe why people pursue certain voluntary activities. In part, self-determination theory says that people are motivated to engage in activities to the extent that they satisfy three psychological needs:
- 1. Competence – progressing in skill and power.
- 2. Autonomy – being able to choose from multiple, meaningful options.
- 3. Relatedness – feeling important to others.
What does this have to do with violent shooters? Rigby, Ryan, and their colleagues argue that many of the design principles of good shooters also happen to follow well worn paths to satisfying these three psychological needs. Let’s take a closer look.
Competence is communicated by immediate and unambiguous positive feedback in response to your actions – you see opponents stagger, see blood fly off them, and ultimately see them collapse. The beloved headshot is particularly effective in this regard. Scott Rigby notes, “I’ll often put up a slide with a great screenshot of a headshot, and it always elicits smiles. The smiles here aren’t because everyone is sadistic – they are because this is a moment of mastery satisfaction that all gamers can relate to. The blood may not be the value component, but really is just a traditional way dense informational feedback on mastery is provided.” Information about competence in shooters is also thrown at you in the form of scoreboards, rankings, weapon unlocks, and eventually the outcome of every (relatively short) match.
Autonomy, the second motivator in self-determination theory, is also well served by the design of most popular shooters. Having the option to choose many different paths through a level satisfies autonomy, as does choosing between different classes, different loadouts, or different tactics. In a lot of games you can even choose between different modes, modifiers, or maps, allowing you to satisfy the need to play a game how you please. And if that’s not enough, custom character or weapon skins or models also fit in here.
Finally, relatedness is most obviously important in multiplayer games where you can feel like part of a successful (or, perhaps more likely in the case of pickup games, incompetent) team bound together by opposition to a common foe. To the extent that shooters communicate your contributions in the form of scores, points, server-wide notifications, or MVP awards, relatedness will be satisfied – to say nothing of what you can get out of text and voice chat. But even most modern shooters have single-player campaigns that somewhat mimic this and put you in the role of someone important to those around you.
Of course, none of these motivators is unique to shooters. They show up in good game design across all genres and themes. But violent shooters usually hit on all three, and Rigby and Ryan believe there’s a big overlap between what makes an effective shooter and what satisfies multiple facets of all three of these psychological needs. So while RPGs might nail autonomy, platformers may demand competence, and MMOs may allow the most relatedness, violent shooters fire on all three cylinders.
“[Violent games] are fun not because of the blood and gore,” write Rigby and Ryan, “but because games of war and combat offer so many opportunities to feel autonomy, competence, and the relatedness of camaraderie rolled up into an epic heroic experience.” But, that all said, do shooters satisfy all these motivators so well because they’re violent?
It’s an important question, and Ryan, Rigby, and their colleague Andrew Przybylski published a 2009 study in the Personality and Social Psychology Bulletin that addresses it. Part of their research involved a clever experiment where they modified Half-Life 2 to create a high-violence version of the game’s multiplayer and a low-violence version. The high-violence version is pretty much what you’d expect. The low-violence one, though, was created by changing the bullet-spewing guns into “tag” tools that players would use to zap opponents. Once tagged, foes would freeze and float up into the air for a second before being harmlessly teleported to a “penalty box” where they would wait to respawn into the game. So the main difference – arguably the only difference – between the two versions was how much violence there was in the game. Everything else was the same: the level layouts, the controls, and all the other stuff that satisfied competence and autonomy (unfortunately they didn’t examine relatedness). Only the violence was teased out of the equation.
What did they find? Well, a lot of things. But one interesting finding was that the games in both conditions were found enjoyable, and both satisfied the basic psychological needs of competence and autonomy. Even whether a person was naturally aggressive and normally enjoyed violent games didn’t matter once you accounted for competence and autonomy.
To me, this is vastly interesting and argues for alternatives to the go-to trope of violence and gore if you’re looking to draw people to games. It’s not the bloodshed as much as it is feeling like you’re able to make what you want happen on-screen. It’s not fetishising guns and explosions as much as it is the ability to use tactics and choose among meaningful options on the road to victory. It’s not the military themes as much as it is feeling like you’re an important part of a team.
Sure, war and military heroism are themes and experiences worthy of exploration, but there are other options that can be just as effective. Gamers may be happy to just keep buying the same game over and over again without understanding a thing about self determination theory, and publishers may only want to greenlight games that look like smash hits from the past without caring about mechanisms for satisfying psychological needs, but developers who think about these things and play around with them can definitely do something both great and different.
Ouya, the open Android-based console designed by Yves Behar, is being shipped to its Kickstarter backers today, and the company officially announced this week at GDC that it will hit retailers in the US, UK and Canada on June 4. Ouya is promising “hundreds” of titles for the June 4 release and the $99 console will be available at Amazon, Best Buy, GAME, GameStop, Target, and the store on OUYA.tv. Additional controllers will be sold for $49.99. And for digital purchases, consumers will be able to get pre-paid cards with redeemable codes at retail if they wish.
The company said that over 8,000 game developers worldwide are currently developing games, including both up-and-comers and better-known game makers like Square Enix, Double Fine Productions, Tripwire Interactive, Vlambeer, Phil Fish’s Polytron Corporation, and Kim Swift’s Airtight Games. “The majority of devs so far are experienced devs who’ve never built an Android game before. About 1 out of 5 have never even built a game before,” Ouya CEO Julie Uhrman said at the GDC unveiling. She boasted that Ouya “already has more titles a couple months before launch than any console has ever launched with.”
The Ouya hardware itself is even smaller than we had previously thought (think Rubik’s Cube or smaller), and its sleek design and brushed aluminum is pleasing to the eye. Uhrman, however, stressed the controller more than anything else. “What we spent the most amount of time on is the controller. We really want this to be our love letter to gamers,” she said, adding that Ouya focused on the ergonomics, the weight, the feel, and wanted it to be a precise, accurate controller. “This is one of the pieces of Ouya that evolved a lot based on early supporter feedback,” she continued.
Apparently, the feedback led to numerous changes to the controller in terms of button placement and the style of d-pad. The team found that many preferred a cross-style d-pad to a disc because it’s superior for fighting games. The engineers also retooled the tension of the analog sticks and the design of the shoulder buttons. And Ouya even made the responsiveness and speed of the center touchpad customizable. In this journalist’s hands, it felt comfortable and familiar while playing a few titles.
After showing off the hardware, Uhrman dived into the user interface of Ouya. The whole UI is incredibly streamlined, with four categories and an apps-like layout. The four categories are Play, Discover, Make, and Manage (which is for settings). Play is simply where anything you’ve downloaded – games or music or video apps – will be placed. Discover is the store, and it’s been designed to encourage people to “find the best games.” For example, sub-selections in Discover include featured channels like Go Retro, Hear Me, Genres, and Sandbox. The plan is to offer more descriptive names for games within genres.
“The way games get exposed in the genre list is based on what we call the O-rank, which is our fun algorithm. It’s how we rank great games. A lot of app platforms today use downloads as a metric or they use revenue as a metric and we don’t think that’s a good way to say if it’s a good game,” Uhrman said. “You could download a game and never play it again. And with the free-to-try model, revenue isn’t necessarily the best model either. What is [a good metric] is what proves that the game is fun, and that’s engagement. So things like how long you have played a game, how many times you’ve played that game over a certain period of time. How quickly from the time you boot up Ouya, which is an always-on device, do you play that game… It’s those types of engagement metrics that we think prove it’s a fun game.”
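Ouya hasn’t published the O-rank algorithm, but the engagement signals Uhrman lists suggest its general shape. As a rough illustration only, here is a minimal Python sketch that ranks games by play time, repeat sessions, and boot-to-play speed; the function name, weights, and sample data are all invented, not Ouya’s.

```python
# Hypothetical sketch of an engagement-based ranking in the spirit of O-rank.
# The real algorithm is not public; metric names and weights are invented.

def engagement_score(total_minutes_played, sessions_last_30d,
                     seconds_from_boot_to_launch):
    """Combine the engagement signals Uhrman mentions into one score."""
    time_component = total_minutes_played / 60.0   # hours of play
    repeat_component = sessions_last_30d * 2.0     # repeat visits weigh heavily
    # A game you launch quickly after booting the console is a game you reach for.
    eagerness_component = 100.0 / (1.0 + seconds_from_boot_to_launch)
    return time_component + repeat_component + eagerness_component

# (total minutes played, sessions in last 30 days, seconds from boot to launch)
games = {
    "TowerFall": (1200, 45, 10),
    "ShadowGun": (300, 5, 120),
}
ranked = sorted(games, key=lambda g: engagement_score(*games[g]), reverse=True)
print(ranked)  # the more-engaging game ranks first
```

Note that raw downloads and revenue appear nowhere in the score, which captures Uhrman’s point that a downloaded-but-abandoned game shouldn’t rank well.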
Another interesting area within Discover is Sandbox, which offers developers an opportunity to put early builds up and ask people to thumb them up. The idea is for great games to get out of the Sandbox and become searchable and merchandized. It encourages developers to market their games and promote them to fans. Once you get out of Sandbox, you know the games next to yours are of great quality, Uhrman explained.
The Make channel is an area that appears to still be in flux. Uhrman said the goal is to serve two audiences, gamers and developers, equally. While Make is a place where a developer can upload early builds, over time it’ll be a place for devs to communicate with fans. “We also can grow it to be, what if you want to make a game, here’s how to market a game, etc. We’ll look to devs and gamers for feedback on how to evolve the section,” Uhrman said.
A console that’s as open as Ouya should have a fairly simple submission process for developers, right? Uhrman confirmed that it’s not overly complicated and should be something most can complete within an hour. “It’s something we thought a lot about given that we’re an open platform… but we wanted to make sure that there are good quality games, at least to the extent that it was optimized to the television and for the controller. So the guidelines isn’t necessarily a quality review, but it checks if there’s malware, does it break or freeze often, does it use our controller schema in the right way, we need to make sure there’s no IP infringement, no pornography, does it elicit real-world violence, you are who you say you are kind of thing – that’s the review. We try to keep it under an hour. Developers can choose to go live immediately or they can choose a certain time,” she detailed.
Curiously, there’s been no partnership reached with the ESRB to rate the games in North America. Right now, the games will be self-rated by devs and community reviewed. Given that Ouya is being sold in mainstream retail, however, we do have to wonder if this will pose potential problems for the company in an atmosphere where some people are still pointing fingers at violent video games. “We’ll take it as it comes; right now we want to expose great content from any type of developer and we do have the thumbs-up/like feature or the report if this is abuse on the system,” responded Uhrman, adding that “We basically say that we can change the rules at any time and we can reject the game for any reason that doesn’t fit our content guidelines – we want everybody on Ouya to have a great experience.”
Ratings aside, one of the big questions surrounding Ouya is whether or not it can truly carve out a market for itself in the console space as industry veterans Sony and Microsoft prepare to launch their respective next-generation systems. The games we saw on Ouya are not graphically intense and are very indie in nature. Can Ouya handle high fidelity triple-A releases? Or does it even need to in order to get noticed?
Ouya does have a partnership with OnLive, so that’s one way to get triple-A games. “That’s one solution. We also support 1080p, hi-def… and we have a USB port so someone can add an external hard drive, so for games that are heavy you could absolutely use that. We have a max download size of 1.2GB for the first download, but as a developer if you want to add and send additional content from your servers you can,” Uhrman said.
“Traditional games take longer to develop, and we have some of those in development that we’re really excited about. Ouya is not about the number of polygons on the screen,” Uhrman acknowledged. “That’s not where we went. We wanted to have innovative and creative exclusive content, and we’re already starting to see that.”
Exclusive content plus a very appealing $99 price point is what could make the system an easy impulse buy for many gamers, Uhrman believes. Moreover, Uhrman noted that most core gamers tend to purchase more than one console, so Ouya is likely to be something they’ll want to buy even if they are getting a PS4.
“Ouya offers something different; every gamer has a different expectation depending upon the platform and we believe we’re going to have innovative, creative games and exclusive games to Ouya… And the barrier to entry at just $99 where every game is free-to-try, I think opens up the opportunity for a number of gamers, even core gamers. Core gamers on average own more than one console. We don’t really think it’s an either/or situation. We’re offering something different – I think they’re going to want Ouya too,” she said.
A number of traditional consoles in the past have launched selling at a loss. Since Ouya is built with off-the-shelf components, it may be easier to contain costs, but Uhrman wouldn’t confirm that each unit is sold at a profit. “We’re really comfortable with our business model,” is all she would say.
That said, if things go the way Uhrman would like, this is only the beginning. Ouya will continue to evolve its software and hardware, and the hardware is likely to get refreshed quickly.
“We’re like any other software platform that iterates and grows over time, and we’ll have a hardware refresh rate more similar to a mobile refresh rate than a console refresh rate because we want to take advantage of the best chips out there and falling commodity prices. We will certainly make sure that there’s enough content that’s optimized for that chip and we don’t push on higher prices to the consumer,” she said.
Does that mean some Ouyas in future will not be compatible with certain games? Uhrman is looking to avoid that scenario. “We have a plan where all content will be compatible with future Ouya systems; we don’t want to fragment our own market for developers, and we always want gamers to have a great experience,” she commented.
Ouya will be interesting to watch. It’s a bold move for the industry and everything we’ve seen so far is completely unconventional. Whether or not that will pay dividends in the long-run is hard to judge at this point in time. “The market is calling us the ‘un-console’ and we like doing things the ‘un-way’,” Uhrman remarked.
Warhammer 40K owner Games Workshop has confirmed a new licensing deal with Roadhouse Interactive to develop new mobile titles based on the franchise. The Vancouver-based developer describes the new Warhammer title as a side-scrolling action game.
While Roadhouse confirms that the game is in development, which mobile platforms the finished game will be released on is still up in the air at the moment, but more information is sure to come in the months ahead, according to the studio.
There have been other attempts to capture the Warhammer 40K tabletop war game in video game form before. Those offerings have met with mixed reviews, but this new title from Roadhouse will be a first for Warhammer 40K in the mobile space.
Video game research firm EEDAR, which already has a proprietary database of over 100 million internally researched data points from more than 90,000 physical, digital, mobile, and social game products, is gearing up for the launch of a new service to assist mobile and social developers. EEDAR said that its new suite of mobile, tablet, and social products will aim “to improve sales potential and game quality for titles utilizing in-app monetization.”
EEDAR said that one of the most important things a developer can do is to optimize a game before launch. “EEDAR is able to provide an assessment at any point during the development cycle and accurately project key performance measurements of the final product, in addition to a qualitative assessment that provides feedback from the perspective of a professional game critic and consumers,” the company said about its new product suite.
We spoke with Jesse Divnich, VP of Insights at EEDAR, to get an overview of the key takeaways from the firm’s research on the mobile and social markets. Divnich stressed that developers must be prepared with their in-game monetization strategy for retention and boosting conversion rates before a title is released into an app store.
“When the mobile game market was emerging, developers could optimize key monetization features after a game’s launch. The onboarding acquisition process had a long tail. Today, due to competition and larger consumer awareness, the time to peak engagement is rapidly shortening,” he noted.
“Facebook/Social games are a perfect example. Games like Farmville took nearly a year before they reached their peak users. It gave Zynga ample enough time to adjust game features to increase engagement monetization rates. Now, Social games are peaking within weeks and this idea of always being in ‘beta’ quickly shows its weaknesses when you are onboarding the majority of your lifetime users in only a few weeks,” he continued. “The mobile market is beginning to reach that point. Mobile games are making more headlines, consumers are becoming aware of hit titles faster. Simply put, consumers are engaging mobile games closer to a game’s release date and sleeper hits are becoming less prevalent.”
Even getting highlighted by Apple doesn’t mean what it used to. Developers can squander a great opportunity if they don’t make an effort to optimize. “Being featured by Apple no longer means weeks or months on the top charts. At most you have seven days and if your title is not fully optimized, you will leave money on the table,” Divnich added. “Going forward, developers must ensure they’re launching with maximum optimization, both from an artistic and scientific perspective. This means dedicating more resources to pre-launch analytics and qualitative testing.”
So what are some other notable mistakes developers are making? Well, mimicry certainly isn’t helping. Just because something works in one game doesn’t mean it can be successfully “borrowed” for a different game.
“There are still a large chunk of developers that are still too short-sighted. Clash of Clans has been a top seller for a few months and nearly 50 percent of the concepts and vertical slices that come across my desk in some way or another have an 80 percent overlap of Clash of Clans’ engagement loop. After we perform our assessments, some developers are disappointed to learn their retention, conversion, and monetization rates potential are a fraction of the results Clash of Clans has produced,” observed Divnich.
Even if your game is successful at the start, retention is a real problem, as it’s hard to create a game that has legs. “Competition within the mobile markets is at its fiercest, and every week there are at least seven high-quality releases trying to fight for our attention. The increase in competition, media coverage, and consumer awareness has driven down retention rates, for some genres, to dangerously low levels,” Divnich explained.
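The retention rates Divnich refers to are conventionally expressed as day-N retention: the share of a launch cohort that comes back n days after installing. A minimal sketch of the calculation, using made-up cohort data rather than anything from EEDAR:

```python
# Illustrative day-N retention calculation; the cohort data is invented.
from datetime import date

installs = {"u1": date(2013, 3, 1), "u2": date(2013, 3, 1), "u3": date(2013, 3, 1)}
sessions = {
    "u1": [date(2013, 3, 2), date(2013, 3, 8)],  # returned on day 1 and day 7
    "u2": [date(2013, 3, 2)],                     # returned on day 1 only
    "u3": [],                                     # never came back
}

def day_n_retention(installs, sessions, n):
    """Fraction of installers who played again exactly n days after installing."""
    returned = sum(
        1 for user, installed in installs.items()
        if any((played - installed).days == n for played in sessions[user])
    )
    return returned / len(installs)

print(day_n_retention(installs, sessions, 1))  # 2 of 3 users came back on day 1
print(day_n_retention(installs, sessions, 7))  # 1 of 3 users came back on day 7
```

The falling rates Divnich describes would show up here as day-1 and day-7 figures drifting downward release after release.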
The key, he said, is to drive connectivity with a very attractive multiplayer component. “Right now, the tried and true method for improving retention has been multiplayer and social features. The correlation between retention rates and the inclusion of multiplayer and social features is ridiculously high,” Divnich noted. “We do issue caution, however. Just because games with strong multiplayer and social support sell well doesn’t mean slapping on a multiplayer component will automatically make your game a success.”
“We’ve seen this trend occur in the traditional HD gaming space. Call of Duty: Modern Warfare created a multiplayer frenzy and everyone thought that by tacking on a multiplayer component their game, too, would be a success. While it helped for some, those that tacked it on were met with lukewarm or disappointing reception. We still encourage our developers to implement new ways of approaching multiplayer and social features, but how they are implemented is key to improving retention rates,” he continued.
While the mobile/tablet space is getting all the attention these days, and social gaming on Facebook has seen sharp declines, that doesn’t mean developers should automatically ignore the social space. There can be opportunities there as well, especially if developers optimize their titles.
“The social platform is still viable and profitable for many developers,” Divnich remarked. “Two years ago developers were fanatic about releasing on the social platform, but they oversaturated the market. There was too much choice in a market, there were no switching barriers for consumers, and there existed too many rip-offs of the standard Farmville or Bejeweled engagement loop. Additionally, Facebook couldn’t keep up with the demand for innovation. Being a platform where consumers violently resist change (e.g. Timeline), it’s difficult to support new tools and back-end features for developers without changing the whole experience altogether.”
“Developers can still be profitable on social platforms, but we certainly approach that space more cautiously,” he concluded.
Sony announced its Playstation 4 console last month, with most of the firm’s event devoted to the AMD accelerated processing unit (APU) that will drive the console. Now Nvidia has said that despite its chips not powering Sony’s next generation games console, games developers programming for the console can use its Physx technology.
Nvidia’s Physx technology is a physics library that works on PCs and current generation consoles. It’s no longer limited to the firm’s own GPUs, meaning that AMD’s APU can execute Physx code properly, though Nvidia would perhaps argue it does so more slowly than the firm’s own chips.
Aside from Nvidia’s Physx software, the firm’s Apex SDK also boasts support for the Playstation 4. Nvidia’s Apex is a set of tools that allows games designers to rapidly develop models and interactive game content. Mike Skolones, product manager for Physx at Nvidia, said, “Great physics technology is essential for delivering a better gaming experience and multiplatform support is critical for developers. With Physx and Apex support for Playstation 4, customers can look forward to better games.”
Nvidia still wants games developers to use its tools despite not being in at least two of the three next generation games consoles, because it gives the firm a chance for its desktop graphics cards to win benchmarks when games are ported to the PC.
I haven’t played any of the Dead Space games, so I can’t comment on the criticisms that Dead Space 3 sold poorly because of its game content or because it dumbed down the gameplay experience to appeal to a broader audience. What I can talk about is how the microtransactions and other changes that vocal fans derided fit into Electronic Arts’ broader strategy.
The games market is polarizing. The big are getting bigger (see Grand Theft Auto and Call of Duty beating first-week sales records year after year), the niche is becoming more viable (see every indie game on Steam), and the middle is getting squeezed (see THQ, Eurocom, and dozens of other midsize developers). The emergence of digital distribution has brought a bigger change than many people realized, driven by two different properties:
- It is cheaper than ever before to distribute content
- It is possible to have unique, personal, one-to-one relationships with every customer
The strategies that EA are putting in place reflect this reality.
The variable demand curve
The past 30 years were about putting games in boxes, shoving them in shops and trying to sell as many as possible. The price was basically fixed at around £30-40, so the only way you could make more money was to do more volume, i.e. sell more copies. You could also try to maintain the price for as long as possible by restricting price reductions and limiting trade-ins. What you couldn’t do was to connect with your fans in any meaningful way.
We no longer live in that world, except perhaps for the very biggest blockbusters. We live in a world where there is a bewildering choice and variety of games available to us. At the same time, development costs for AAA games are enormous and rising, while the market is not getting bigger. In fact, that subset of the market is shrinking as players are distracted by the many different ways, times, and devices on which they can play games.
There is only one solution. It is to find a way to use the initial launch of a AAA game as a starting point in your relationship with fans. It is to start the long process of turning games from one-off purchases into long-term relationships. It is about using games to engage with and retain players, to convert some of those players into fans, and to convert some of those fans into superfans. In the process, niche AAA games that are not viable under the blockbuster, fixed-price-massive-volume model can become successful long-term businesses.
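Back-of-the-envelope arithmetic shows why this matters. Every figure below is invented purely for illustration, not drawn from EA’s accounts: a fixed-price title caps revenue at copies times price, while converting even small fractions of players into fans and superfans lifts lifetime revenue well past that cap.

```python
# Back-of-the-envelope contrast between the two revenue models.
# All numbers here are invented purely for illustration.

def boxed_revenue(copies_sold, price=40.0):
    """Old model: one fixed-price sale per customer, and that's the end."""
    return copies_sold * price

def relationship_revenue(players, fans, superfans,
                         base_price=40.0, fan_spend=60.0, superfan_spend=300.0):
    """New model: everyone pays once, but fans and superfans keep spending."""
    return (players * base_price
            + fans * fan_spend
            + superfans * superfan_spend)

# A niche AAA title that sells a million copies under the old model...
print(boxed_revenue(1_000_000))                          # 40,000,000
# ...versus the same million players, of whom 100k become fans and 10k superfans.
print(relationship_revenue(1_000_000, 100_000, 10_000))  # 49,000,000
```

The point of the exercise is that the extra revenue comes from the 11 percent who chose to spend more, not from raising the price on everyone.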
Viewed through that lens, everything that EA is doing makes sense. It is trying to use its games as the starting point of the relationship. Sometimes those games are free (as in most of its mobile, tablet and online strategy). Sometimes they are paid (as in its console strategy). What they are trying to achieve is a revenue model which means that those people who love their games, who keep playing, who are vocal and demanding, are given an opportunity to spend lots of money on the products that they love. It is the only way for niche AAA games to survive.
I don’t know why Dead Space 3 didn’t do well. I don’t know if it was about poor design decisions, a change of focus or gamers voting with their wallets and not supporting a game with microtransactions on principle (EA will have data on how many users engaged with microtransactions. Answering the other questions will be harder).
But I don’t think gamers should view any rumored cancellations of blockbuster projects as a victory against microtransactions. Finding a way for the biggest fans to pay lots of money to get things they truly value is the only way to support niche AAA games (and by niche, I mean anything outside the top 4 or 5 games released every year). EA may not have got the exact model right yet, but it is experimenting. The failure of the experiment does not mean that EA will abandon microtransactions: it means that it will abandon anything other than blockbuster games and tablet games.
Is that what you really want?
Nicholas Lovell is director at GAMESbrief, a blog about the business of games. He provides business advice on free-to-play and paymium design. He will be giving a masterclass on how to make money from free-to-play games in San Francisco on Sunday March 24, just before GDC. You can also book one-to-one surgeries.
Since Sony decided to keep it simple and talk about games and everything except the actual hardware inside the PlayStation 4, AMD’s John Taylor not only wrote a blog post elaborating on it, but also gave quite a good hint of what we can expect in the near future.
First of all, we noticed that John Taylor, who previously worked as Director of GBU Marketing, has now become the Vice President of Global Communications and Industry Marketing at AMD, so we are quite sure that we will see quite a few interesting things from him down the road. In case you missed it, John Taylor had been leading product communications at AMD since 2006, before joining the GBU marketing team.
Although he does not reveal any precise details regarding the APU itself, John did shed some light by calling it a semi-custom APU. As you already know, an APU is a single chip that combines the CPU and GPU with various system elements including memory controllers, specialized video decoders, display outputs and similar things. What makes it interesting is the actual level of customization that can be done for customers with very specific demands.
If you read between the lines, it is quite clear that the APU inside the PlayStation 4 will not be the last custom part we will see. It all but confirms that AMD has scored the Xbox Next win as well, completing the “Holy Trinity” of consoles. The customization might be an interesting deal, as it also means that the Xbox Next APU might be a bit different from the one found in the PS4. Of course, it could still end up with the same AMD Jaguar CPU cores that are the main part of the PS4 APU, and probably a similar GPU part, but with such a level of customization, anything is possible.
AMD’s VP of Global Communications and Industry Marketing, John Taylor, finished his blog post with quite an interesting line, stating that this is going to be a very exciting year for gamers, especially for those with AMD hardware in their PCs and consoles, as AMD has even more game-changing announcements still to come.
There is no “perfect answer” to doing business with video games. Let’s call a halt to the pointless “zero-sum” debates that blighted 2012
A day in which you learn nothing is a day wasted; by which standard, a year in which we learned nothing would be a pointless waste of time indeed. It’s worth, as 2012 draws to a close (all that’s left now is the few days of indulgence before the year, in harmony with our waistbands, croaks its last), thinking about what we’ve learned. What did 2012 teach us that we did not know before? Never mind, for a moment, the money we earned or lost, the games we played or made; did we grow? Did we advance? Did we learn?
From a business standpoint, certainly, we learned a great deal. 2012 cemented the place of mobile in the gaming ecosystem, forcing all but the most ardent refuseniks (so Nintendo and… er… that’s about it) to recognise mobile as an important part of their business – and even those who were slow to react to the rise of mobile gaming seem determined not to be left behind as tablets gain steam, with 2012 having shown us pretty clearly that the iPad and its myriad imitators are on track to become the primary data device of many consumers in the coming years.
We also learned some things – although not enough, I reckon – about where price points are heading. Freed of the artificial barriers to entry which define console platforms and physical retail, the App Store and Google Play have shown us where prices for digital content will inevitably trend towards – zero. In 2012, more entertaining, successful games than ever before launched at the princely price point of absolutely nothing. Plenty of others didn’t debut at far above 99p, and several of my favourite games of the year would have given me change from a £10 note. Free to play, with all that it entails, remains in its infancy, but is clearly going to be with us for the long haul; hopefully 2013 might be the year when the industry stops having ill-tempered hissy fits about this fact, and starts engaging with making F2P work better rather than loudly and pointlessly damning or exalting it at every turn.
That, perhaps, is a reasonable lead-in to something that I don’t think we learned this year, as an industry – we didn’t learn to stop being afraid of zero-sum games that don’t really exist. Discussions about mobile gaming, even among supposed professionals and experts, often descend into abject ridiculousness due to an insistence that mobile games will come to replace all other kinds of games, or that they are doomed to be a cynical, low-quality niche – neither of which position stands up to the slightest moment of intellectual scrutiny. The same applies to the vitriolic arguments about free-to-play which have washed over and back across 2012 like a stinking, polluted tide – when one side insists that everything will eventually be F2P, and the other insists that F2P is intrinsically evil and wrong, you’re no longer dealing with professional debate, but with dumb fanaticism.
I’m not saying, by the way, that we should all be cautious fence-sitters – there’s no virtue to sitting on the fence simply because it’s comfortable. Strong beliefs are good, but meaningless unless tempered by reason and fact. The fact is that cinema did not kill theatre, television did not kill cinema, video games have yet to viciously murder books, home recording did not kill music and video did not kill the radio star. Media and entertainment industries are ecosystems that accommodate an extraordinary range of different kinds of product and different business models – and that is not ever going to change. The idea that one form of entertainment, one form of business model or even one form of distribution will emerge to Rule Them All is simply an idiot’s fantasy.
I say that with absolute confidence, not just because it is supported by countless years of history and the sheer wealth of culture and entertainment they have bequeathed to us, but because I recognise where the belief springs from. It’s the unique curse and blessing of the games industry that it teems with “left-brained” people – logical, analytical, mathematical, and quite different from the “right-brained” people who often dominate other creative industries. Video games were born with both feet firmly in the sphere of technology, only gradually moving to straddle the worlds of both technology and art – a marriage which is superbly creative but often fraught, as evidenced by the hissing recoil of many gamers and industry types alike when presented with the (stonkingly obvious) fact that games are an artform.
Left-brain people (yes, modern psychology dismisses this terminology, but it’s so much more polite than grouping you all as “geeks” and “arty types”, isn’t it?) love perfect answers. They like problems which have a correct solution, and see the world in those terms. In many industries, they’re perfect business leaders – there absolutely is a single most efficient way to extract oil or metal from the ground, to build an aircraft, to lay out a road or rail network. In entertainment, though, the idea of a “perfect” solution runs into a huge set of problems which utterly stump the left-brained – sentiment. Emotion. Irrationality. Sheer outright bloody-mindedness.
The fact is – nobody needs entertainment. Not really. If video games, films, books, music, plays, TV shows, paintings and sculptures all disappeared tomorrow, we’d be a much diminished species, but nobody would die. People need shelter, food, clothing, transport, protection, fuel – but entertainment is “discretionary”. It says so right there in your accounts. It’s spending at your discretion – and what that means is that it’s spending guided not by optimisation, but by sentiment.
Is free-to-play the most efficient way for money and experiences to change hands between developer and player? Is mobile or tablet gaming the most cost-effective route for consumers to engage with video games? Yeah, maybe – but what so few of us seem to really grasp is that this doesn’t actually matter. Is MP3 music the perfect balance of quality, convenience and file size? Probably – but vinyl shops thrive and specialist services offering “lossless” quality music files are on the rise. Is Kindle the best way to consume books? Yes, undoubtedly – but I don’t think of myself as “consuming” books. Some books I just read; some I own; some I treasure. Sentiment; emotion; irrationality. I went to a shop and bought a leather-backed volume of a book I already own in paperback and Kindle alike. I’ll probably never read it. I love it. Am I an idiot, failing to see that this is not the optimal consumption path and bound to realise the error of my ways eventually? No, because this is my discretion; this is how I choose to enjoy and to spend on my pastime.
That’s why the zero-sum game will never come to pass – not as the strident debaters of 2012 believed. A very large number of consumers will still want things like dedicated gaming hardware, expensive full-price releases and physical products, not because this makes “sense” in an economic or logical way, but because they love those things and because, beyond straightforward questions of affordability, “economic sense” isn’t a welcome guest in deliberations about your hobbies and your passions.
The industry evolves and changes – never as rapidly as it did in 2012, though 2013 will probably make our heads spin just as fast – but little is truly lost. We don’t sell petrol, or sliced bread, or concrete, or train tickets. We sell experiences and emotions, and people will choose to consume those in the way that makes them feel best, not the way that is most coldly, mathematically efficient. Nobody fears that releasing Shakespeare adaptations on DVD will shut down theatres, or that allowing buskers onto the streets will eventually lead to concert halls being demolished. It’s time that we, too, learned that the expansion of the games business leads to more opportunities and more diversity, not to an existential threat to things we love – or worse, a chance to gloat over the imagined demise of things we hate. If you’ve got one New Year’s resolution to make for 2013, make it this one – no more zero-sum arguments. Mobile won’t kill console. F2P won’t kill full-price. Cloud won’t kill local. The forest grows ever bigger; the old tree doesn’t block the sunlight from the new trees, and the new trees do not strangle the roots of the old.
Electronic Arts is consolidating some of its online gaming efforts. The publisher is taking its free-to-play gaming hub, Play4Free, and folding it into its online storefront, Origin.
EA already has a free gaming section set up on Origin, with links to all of the Play4Free titles. The same section also plays host to additional EA efforts like Crossfire and more casual games from the publisher’s Pogo casual gaming brand, including Word Whomp and Monopoly: The World Edition.
The Play4Free brand has been home to seven of EA’s free-to-play online games, including Battlefield Heroes, Battlefield Play4Free, and Need for Speed World. Even with the additions, the Origin label doesn’t extend across all of EA’s PC microtransaction titles; Tiger Woods PGA Tour Online continues to operate apart from the Play4Free brand, and the game’s official site gives no indication that it will be moving to Origin. Additionally, EA runs a number of games on Facebook, including The Sims Social and Outernauts.
Play4Free was announced in 2008, and officially went live with the 2009 launch of Battlefield Heroes. Origin is a comparatively younger endeavor, having been unveiled and rolled out in June of 2011.
Crysis 3 is one of the most anticipated game titles, and it appears that the PC version will feature a high-res texture pack from day one.
According to a post over at PCGamer.com, Crysis 3 will feature a high-res texture pack as well as some advanced graphics options that will put the console version to shame. As you may remember, Crysis 2 only featured v-sync, resolution, HUD bobbing and general quality settings before the famous patch. Crytek and EA are not going to make the same mistake, and will include a great deal of settings that will make the PC version much better than the console version.
The list includes game effects, objects, particles, post processing, shading, shadows, water, anisotropic filtering, texture resolution, motion blur amount and lens flares.
In any case it sounds like really good news for PC gamers.
EA has finally revealed minimum, recommended and high-performance system requirements for the upcoming Crysis 3 first-person shooter and, unsurprisingly, if you want to play it at high-performance settings you’ll need AMD’s Radeon HD 7970 or Nvidia’s GTX 680 graphics card paired up with a decent CPU.
Posted over at Crysis.com, the system requirements are pretty much in line with expectations, and Crysis 3 will run on Windows Vista, Windows 7 or Windows 8. The minimum requirements include at least a dual-core CPU, a DirectX 11 graphics card with 1GB of VRAM and 2GB of memory (3GB on Vista). As an example, EA offered Intel’s Core 2 Duo E6600 paired up with a GTS 450 graphics card, or AMD’s Athlon 64 X2 5200+ paired up with a Radeon HD 5770.
The recommended specs go a notch higher, to a quad-core CPU and 4GB of system memory, with examples like a GTX 560 paired up with a Core i3-530, or a Radeon HD 5870 paired up with a Phenom II X2 565. The high-performance requirements include the “latest DirectX 11 GPU” and “latest quad-core CPU” paired up with 8GB of system memory. The examples are Intel’s Core i7-2600K paired up with the GTX 680, or AMD’s FX-4150 paired up with the Radeon HD 7970.
Crysis 3 is scheduled for a February 2013 release and will be available for PC, Xbox 360 and PlayStation 3.
News of yet another PlayStation 3 hack is unlikely to be greeted with too much surprise, but the damage wrought by the release of the LV0 bootloader keys last week could have serious repercussions – not just in terms of PS3 piracy but also for the long-term security of the PlayStation Network.
Up until now, Sony has coped relatively well with the multiple breaches of its security that have occurred over the last couple of years. The original PSJailbreak was built around an exploit in the USB interface present up until firmware 3.41, and that hole was plugged by Sony within weeks. Hackers managed to run a small number of games built for later system software revisions, but through mandatory software upgrades, access to the PlayStation Network was off-limits for those who remained on the hacked firmware.
Then, disaster. Inherent weaknesses in Sony’s encryption algorithms were unveiled by hacker group fail0verflow, swiftly followed by the publication of the metldr “master key” from the infamous Geohot. PlayStation 3 was blown wide open – seemingly irrevocably – from two fronts. Not only could all aspects of the system be decrypted with the master key and then reverse-engineered, but thanks to fail0verflow’s signing tools, the code could be repackaged into a form that the PS3 was happy to process. The era of the “custom firmware” was upon us and there was a point where every console on the market could be compromised simply through running a CFW update from a memory stick.
System software 3.60 saw Sony fight back valiantly. New encryption protocols were put in place which effectively mothballed metldr, while the specific signing algorithms used for fail0verflow’s tools were blacklisted. Encryption keys were changed so new software would not run on older firmware, and Sony even released a revised console with changes to the Cell architecture that addressed some of the exploits hackers were using to gain access to the PS3 hardware – even the metldr key was changed on this new hardware. Access to the PlayStation Network was completely locked out on hacked consoles.
There’s little evidence that the hack which saw PSN’s servers compromised in one of the biggest security failures in internet history had much to do with the breaches that preceded it. The hack was server-side, and there Sony was running traditional hardware with open-source software, which had vulnerabilities of its own. It’s telling that even after PSN was restored to service, the underlying protocols by which the PS3 “spoke” to the servers hadn’t changed much at all.
However, the hackers were not done with PS3. A new “jailbreak” based on another USB dongle appeared last year, dubbed “TrueBlue”. This allowed newer games to run on older, compromised firmware 3.55 PlayStation 3s. It worked through the hackers decrypting newer games and then re-signing them with a variant of fail0verflow’s tools. This time there was no exploit in Sony’s USB code: instead the hackers released their own firmware which would not function without the dongle attached. In short, it was a crude way to monetise the fact that someone, somewhere had somehow managed to retrieve decryption codes from Sony’s latest OS updates. At the same time, the unique “pass phrase” buried within the firmware that allows PS3s to connect with the PlayStation Network was also leaked – and then leaked again after Sony changed it.
If there’s any silver lining to the new PlayStation 3 hack, it’s down to the fact that only decryption has been hacked, not encryption. This means that only older consoles running 3.55 firmware or lower can be used with the latest piracy-enabling firmwares. Consoles running 3.56 or later can’t run any kind of unofficial code.
So how was it done? Despite locking down metldr, there remained one further vulnerability – one that Sony simply cannot revoke: the bootloader key. If you still have an untouched PS3 from the 2006 launch, you can power it up and update it to the latest 4.30 firmware. Every PS3 requires the means to decrypt any firmware update – past, present or future. That’s what the so-called “lv0” bootloader key does, and that’s the final element of PlayStation 3 security that is now out there in the public domain.
How did it get out there? All the indications are that the hackers who made the discovery – who have dubbed themselves “The Three Musketeers” – had no intention of it ever making its way into the public domain. However, one of their associates with access to their work appears to have sold it on, and the release of the bootloader keys was made in response to Far Eastern hackers looking to profit from a new wave of “custom firmware”. Rather than allow others to profit from their work, the “Musketeers” went nuclear and released the master key, so that anyone with PS3 hacking experience could roll their own firmware. Since then, in just the space of a few days, at least two piracy-enabling system updates have been released.
There’s a little good news and somewhat more bad news for Sony here. The good news is that while decryption has now been fully blown open, there is no firmware 4.30 equivalent to fail0verflow’s encryption tools – only Sony has the means to produce code that runs on any console running on firmware 3.56 or higher. The hackers meanwhile, have to rely upon the 3.55 fail0verflow tools, which can only run on un-updated consoles. Many firmware revisions have been released since then and we’d tentatively suggest that the vast majority of active consoles out there will be running on the newer firmware. At the time of writing, any new hacked code cannot be run on these machines.
So while the overall damage in terms of revenue lost to piracy is most likely limited for now, there are still many fundamental issues Sony has to address. Firstly, there’s the integrity of the PlayStation Network. Genuine, legitimate players will be playing online not only with people who’ve pirated PS3 software, but with people who have the means to adjust any game data they want. Pirated games run from read/write PC hard drives rather than read-only optical media, making customisation much simpler – maps could be altered, for example, to give hackers an unfair advantage in a first-person shooter. Sony can address this by changing the “pass phrase” which allows PS3s to connect to PSN, but this brings us nicely to the second major problem: how to tackle the leak of the lv0 bootloader keys.
The problem here is that any change Sony makes to the PS3 software has to be read by the PS3 – and that’s what the bootloader does. The PSN pass phrase can be changed, but that change needs to be integrated into data that lv0 decrypts – and thus it can be read by hackers. Similarly, new games can be re-encrypted with keys not present in current firmwares – but those keys need to be delivered to the console via an update that (you guessed it) lv0 – and thus the hackers – will be able to read. Sony can make it harder for those keys to be revealed; it can encrypt to many hundreds of layers if it needs to – but at the end of the day, the process always begins with the bootloader, which has been irrevocably compromised.
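The structural problem described above can be sketched as a toy model. To be clear, this is purely illustrative Python, not Sony's actual cryptography – real consoles use AES and asymmetric signing rather than this XOR stream, and every key and string below is made up. The point it demonstrates is the one in the paragraph above: however many layers of new keys an update wraps inside itself, the outermost layer must still be decryptable by the one root key baked into every console, so whoever holds a leaked root key can unwrap everything.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic keystream from a key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext against the keystream."""
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

# Hypothetical root key -- the "lv0" analogue baked into every console.
LV0_KEY = b"root-bootloader-key"

# The vendor wraps a brand-new inner key (say, protecting a new PSN pass
# phrase) inside a firmware update -- but the update itself must be
# decryptable by the root key, or no console could install it.
inner_key = b"new-passphrase-key"
inner_blob = encrypt(inner_key, b"secret PSN pass phrase v2")
update = encrypt(LV0_KEY, inner_key + b"||" + inner_blob)

# An attacker holding the leaked root key simply unwraps every layer:
leaked = decrypt(LV0_KEY, update)
recovered_key, recovered_blob = leaked.split(b"||", 1)
print(decrypt(recovered_key, recovered_blob))  # b'secret PSN pass phrase v2'
```

Adding more nested layers changes nothing in this picture: each layer's key arrives inside material the root key protects, which is exactly why the lv0 leak cannot be patched away.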
In terms of guaranteeing the validity of the console attached to the network, Microsoft has been far more aggressive than Sony thus far, and has faced attacks from a number of different sources. Consoles running custom firmware are quickly identified and banned from Xbox Live, while users flashing the DVD drive in order to run burned games have also found themselves barred from the service. But it seems that the hackers are always one step ahead, and in the here and now, pirates are still able to access Xbox Live relatively easily using copied games. Only those foolish enough to run leaked code days or even weeks before the game is released are identified as hackers and face the uncompromising wrath of the banhammer.
So where does this all leave game developers? At the most basic level, when it comes to multiplayer gameplay, the bottom line is that the system-level methods of weeding out cheats probably aren’t enough on their own: it’s going to be down to developers to add further levels of security to ensure the integrity of online gameplay. In short, exactly the sort of thing that’s been a required standard for PC gaming for a long, long time now.
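One common form of the developer-side check described above is server-side validation of client assets. The sketch below is hypothetical – the file names, data and protocol are invented, and a real game would verify much more than static asset hashes – but it shows the basic idea: the server keeps its own digests of the shipped content, so a map modified on a pirate's read/write hard drive fails the check even after console-level security has been stripped.

```python
import hashlib

# Server-side table of digests for the shipped build (hypothetical content).
KNOWN_GOOD = {
    "maps/harbor.dat": hashlib.sha256(b"official harbor map data").hexdigest(),
}

def validate_assets(client_files: dict) -> bool:
    """Return True only if every reported client asset matches the shipped build.

    client_files maps asset paths to the raw bytes the client claims to have
    loaded; unknown paths or mismatched hashes both fail validation.
    """
    for path, data in client_files.items():
        expected = KNOWN_GOOD.get(path)
        if expected is None or hashlib.sha256(data).hexdigest() != expected:
            return False
    return True

# An unmodified client passes; a client with an edited map is rejected.
print(validate_assets({"maps/harbor.dat": b"official harbor map data"}))  # True
print(validate_assets({"maps/harbor.dat": b"map with the walls removed"}))  # False
```

In practice the client would send hashes rather than raw bytes, and a determined cheater can lie about them too – which is why such checks are one layer among several, not a complete answer.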
Sony is facing new PlayStation 3 security headaches today, as Eurogamer reports that hackers have released custom firmware that allows for compromised consoles to go on the PlayStation Network, and LV0 decryption keys that will facilitate circumvention of future security updates.
PlayStation 3 security was largely undermined in early 2011 after hacking team Fail0verflow detailed a technique to get unauthorized code running on Sony’s console. At the time, the group said they attacked the console’s security as a response to Sony removing the OtherOS feature that allowed installation of the Linux operating system on the PS3. Eurogamer notes that Sony’s 3.60 firmware actually managed to plug many of the security holes from that event, but piracy has persisted for those willing to run older firmware and not take their systems onto PSN.
However, the newly released custom firmware contains the current PSN passphrase security protocol. And even if Sony changes that with new firmware, the release of the LV0 decryption keys means that hackers should be able to easily lay bare future security measures in system updates.
According to Eurogamer, Chinese hacking group BlueDiskCFW had planned to sell the custom firmware circumventions, which prompted another group, calling themselves The Three Musketeers, to release the LV0 keys. They also released a statement claiming to have discovered the keys some time ago, adding, “only the fear of our work being used by others to make money out of it has forced us to release this now.”