Sony has promised to have “substantial” resupplies of the PlayStation 4 before the end of the year, but has given no indication as to what qualifies as substantial. Wedbush analyst Michael Pachter has stepped in to fill that information void, telling investors in a note this morning that he believes Sony is making PS4s at the rate of a million systems per month.
Pachter followed up on Sony’s announcement today that it had sold 2.1 million systems worldwide, saying that number fits well with previous estimates that Sony began manufacturing PS4s for retail on September 1, and that it faces a gap of up to three weeks from a system’s creation to the time it arrives on shelves.
“We expect Sony to continue to ship 1 million consoles per month, so as of the end of January, we believe Sony will have manufactured a cumulative 5 million consoles and will have shipped 4.25 – 4.5 million,” Pachter said. “We expect the 55 percent allocation to North America to continue through January, and then revert to a more normalized 40 percent of units once Sony launches in Japan and other countries. We think that Microsoft is on a similar production schedule, with similar allocations to North America.”
Pachter added that specialty retailer GameStop has been receiving roughly half of the systems shipped to North America, and that it will continue to take up that share of the allocations through December. In the New Year, Pachter expects the company’s share to be dialed back to a “more customary” 30 percent.
If the shipment projections are accurate, the PS4 would be more than holding up its end of publishers’ predictions that Sony and Microsoft would combine to ship 10 million units of their new systems by the end of March.
With the release of Grand Theft Auto Online, Rockstar has taken its blockbuster franchise in an ambitious new direction. The multiplayer world, complete with in-game economy, certainly has many of the hallmarks of a Free-2-Play title, but could GTA Online actually make it as a standalone F2P game?
Given the seismic shift the games industry has already made towards F2P, no one would be surprised if Rockstar made this next step. However, there is a lot at stake, and creating a successful F2P game isn’t simply a case of throwing in some in-app purchases and giving a £40 game away for free.
F2P is already established as the dominant business model for mobile and PC games. Reasons for this include the prevalence of micro-transactions and the fact that these platforms make it relatively easy for publishers and developers to integrate analytics and use that data to make informed, real-time game design changes that keep players engaged and improve retention. The transition onto console has been a slower burn – designing successful F2P games requires an understanding and skill set that aren’t necessarily native to publishers with a long heritage in designing games to ship in a box.
“Many F2P console games have come up short, offering a poor tutorial and onboarding process, plus a monetisation structure that is much closer to a used-car salesman than an enjoyable experience that puts the control in the users’ hands”
As a result, many F2P console games have come up short, offering a poor tutorial and onboarding process, plus a monetisation structure that is much closer to a used-car salesman than an enjoyable experience that puts the control in the users’ hands. However, the data capabilities of the Xbox One and PS4 mean that F2P on console finally looks set to take off, with an impressive list of F2P titles already set for release including Little Big Planet, Planetside 2 and War Thunder.
To better understand the potential of this transition to console, we thought we’d take a theoretical look at GTA Online as a standalone F2P title.
Our in-house design team applied GamesAnalytics’ proprietary evidence-based research methodology to benchmark key aspects of its game design against best-practice F2P game design from over 80 titles.
Focusing on six main categories, including Monetisation, Retention, Engagement and Virality, and analysing 50 key criteria, the team found, unsurprisingly, that GTA Online surpassed the best-in-genre score for Retention, Game Mechanics, Engagement and Game Overview, clearly reflecting the high quality of the game. However, if GTA Online were going F2P, it would need to look at mechanics around Monetisation and Virality.
Based on these data findings, here are five recommendations to improve the F2P potential of GTA Online:
1. Improve the currency structure
Currently GTA Online has a single currency. That is fine when the game is not relying on the currency as part of its monetisation, but a true F2P game would want to extend this to provide greater flexibility. Adding a premium currency is generally the way to give a game more flexibility in delivering the F2P mechanic. Making the currency part of the world so it feels natural is vital to making sure the monetisation doesn’t jar with the surrounding game.
There are a number of ways that people are encouraged to spend money, in both the real and the virtual world. For a game like GTA especially, it is vital that spending feels natural and intuitive. Discounts and bundles are obvious incentives for getting people to invest in in-game economies, but rentals and test drives are also a good way of letting players get a taste for the high life and incentivising them to keep grinding or splash the cash.
These ‘try before you buy’ mechanics are good ways of easing players onto the paying path while keeping the barrier low and the incentive high.
Giving players the ability to buy luxury vanity items using a premium currency is exactly the way you would expect Rockstar to monetise its players. The game has always been about getting rich quick and showing off the proceeds of your crimes. This is not about honest hard slog, so it’s fitting that players should be given a quick route to the high life through whatever means at their disposal. A successful free-to-play GTA Online should also include consumables: things that the player will spend money on that give them a short-term advantage or simply let them show off.
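To make the dual-currency idea concrete, here is a minimal Python sketch of a wallet holding an earned “grind” currency alongside a bought premium currency; the class, prices and currency names are hypothetical, not anything Rockstar has announced.

```python
from dataclasses import dataclass

@dataclass
class Wallet:
    """Toy two-currency wallet: earned 'cash' plus purchased 'gold'."""
    cash: int = 0   # grind currency, earned through missions
    gold: int = 0   # premium currency, bought with real money

    def earn_cash(self, amount: int) -> None:
        self.cash += amount

    def buy_gold(self, packs: int, gold_per_pack: int = 100) -> None:
        # The real-money transaction happens elsewhere; we only credit the balance.
        self.gold += packs * gold_per_pack

    def purchase(self, cash_price: int, gold_price: int) -> str:
        """Spend grind currency first; fall back to premium currency."""
        if self.cash >= cash_price:
            self.cash -= cash_price
            return "paid with cash"
        if self.gold >= gold_price:
            self.gold -= gold_price
            return "paid with gold"
        return "cannot afford"

wallet = Wallet(cash=500)
wallet.buy_gold(packs=2)                                  # player tops up with premium currency
print(wallet.purchase(cash_price=800, gold_price=150))    # -> "paid with gold"
```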
2. Introduce a VIP structure to fast track progress and reward members
“This is not about honest hard slog, so it’s fitting that players should be given a quick route to the high life through whatever means at their disposal”
There is no game that is more about being king of the hill than GTA, so a full VIP structure is essential. Imagine the retention value of being the only player that can drive around the hills of Los Santos in a purple Ferrari with gold trim.
VIP membership could offer:
Rank Point/Job Point boosts
Monthly $/Gold allowance
Access to premium clothes, vehicle paint jobs and vanity items
Special members store accessible through the iFruit with daily/weekly member offers
3. Utilise no-lose gambling
We’ve already touched on the repetition which exists within GTA Online – completing mission after mission to build up your cash and accessory stockpiles. One alternative to a life of hard graft and long hours is gambling, an easy-to-implement F2P mechanic which fits with Rockstar’s vision and GTA’s ‘feel’. Mechanics such as magic boxes offer players a no-lose gamble: spending some money guarantees something cool. There can be no better way of taking the easy route than making sure the odds are stacked in your favour.
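A “magic box” is essentially a weighted draw with no empty outcome: every spend returns something, with the rarest items the least likely. A minimal Python sketch, with made-up prizes and weights:

```python
import random

# Hypothetical prize table: every entry is a win, rarer items get lower weights.
PRIZES = [
    ("respray voucher", 60),
    ("rare paint job",  25),
    ("gold-trim rims",  12),
    ("purple supercar",  3),
]

def open_magic_box() -> str:
    """No-lose gamble: the player always receives something."""
    items, weights = zip(*PRIZES)
    return random.choices(items, weights=weights, k=1)[0]

# Ten spins: every spend pays out, but the jackpot stays rare.
print([open_magic_box() for _ in range(10)])
```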
4. Introduce a trading mechanism to help increase community aspects
If gambling isn’t your thing then a bit of business on the side can help you make it to the top. Trading in F2P games inevitably encourages a black market, but unlike other F2P games where there is a clear split between grind currency and premium currency, GTA Online F2P should allow this secondary market to exist.
Letting players trade whatever they want will encourage a free-form economy that will favour the adventurous, the ruthless and the downright corrupt. The mechanic will drive the economy and build player loyalty.
Players will buy and sell from each other, and with rare items it is also possible to use data analytics to monitor price elasticity as players bid for them. Items can trade for 100x their original value in F2P games, and that data can be useful for defining pricing as well as delivering value and incentivising players.
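As a rough illustration of that kind of analytics, the sketch below (Python, with hypothetical items and prices) records completed player-to-player trades and compares the median sale price with an item’s list price, one simple signal for how far demand has pushed an item above its original value.

```python
from collections import defaultdict
from statistics import median

LIST_PRICES = {"gold-trim rims": 500, "rare paint job": 1200}   # hypothetical list prices
trades = defaultdict(list)                                      # item -> prices actually paid

def record_trade(item: str, price: int) -> None:
    trades[item].append(price)

def price_multiple(item: str) -> float:
    """How many times the list price players are actually paying."""
    return median(trades[item]) / LIST_PRICES[item]

for paid in (4000, 5500, 6200):
    record_trade("gold-trim rims", paid)
print(round(price_multiple("gold-trim rims"), 1))   # -> 11.0, i.e. trading at 11x list price
```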
5. Build in reward mechanics for better social sharing
GTA is such a well-known franchise that it pretty much sells itself. However, rewarding players for inviting other players to join is a well-established mechanism and can help to double your player base for little or no cost.
Giving players an incentive to invite is key: there would be nothing better than being able to pimp out your friends by taking a cut of the money they spend, your just deserts for getting them into the game in the first place.
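A referral cut like that could be modelled very simply: record who invited whom, then credit the inviter a percentage of whatever the invitee spends. A toy Python sketch with hypothetical player names and a 5 percent cut:

```python
from typing import Optional

referred_by = {}   # player -> the player who invited them
balances = {}      # player -> in-game currency balance

def register(player: str, inviter: Optional[str] = None) -> None:
    balances.setdefault(player, 0)
    if inviter:
        referred_by[player] = inviter
        balances.setdefault(inviter, 0)

def spend_real_money(player: str, amount_in_game: int, cut: float = 0.05) -> None:
    """Credit the buyer, then pay their inviter a small cut as the referral reward."""
    balances[player] = balances.get(player, 0) + amount_in_game
    inviter = referred_by.get(player)
    if inviter:
        balances[inviter] += int(amount_in_game * cut)

register("trevor")
register("michael", inviter="trevor")
spend_real_money("michael", 10_000)   # trevor is credited 500 for the referral
print(balances)                       # {'trevor': 500, 'michael': 10000}
```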
With the PlayStation 4 and Xbox One on the scene, the next console generation has finally begun. While a new generation usually brings the promise of more graphical power, great graphics are only part of the gaming equation. What will these new consoles allow developers to do creatively?
In its last two titles, Dear Esther and Amnesia: A Machine for Pigs, independent developer The Chinese Room focused on pushing the first-person game away from the shooting mechanics that usually dominate. The studio’s next title, Everybody’s Gone to the Rapture, is coming to PlayStation 4 with some help from Sony Computer Entertainment. For The Chinese Room, next-gen helps their creative juices just by being easier to work with.
“The blunt reality is that easier production equals more creative freedom and opportunity”
The Chinese Room creative director Dan Pinchbeck
“I think the major thing, from the perspective of actually building games, is less for us about the power – that’s brilliant of course, and having significantly higher budgets makes a big difference – but it’s more about the ease of working with PS4,” The Chinese Room creative director Dan Pinchbeck told GamesIndustry International. “So far, it’s just been a dream bit of kit to work with. We’ve got the advantage of working with CryEngine, another great piece of tech of course, but even then it’s been remarkably smooth to get things up and running quickly. That’s worth its weight in gold from a production standpoint, and the blunt reality is that easier production equals more creative freedom and opportunity.”
According to Braid creator Jonathan Blow, aiming for a single, next-generation set of specifications allowed the team behind The Witness to settle on a single visual style for the game. That title is also heading to PlayStation 4 in 2014.
“Creatively, we build and we assume that we have enough power in rendering,” explained Blow. “When we were planning the look of the island, we had a couple of choices. Do we target the PlayStation/Xbox 360 class of machines or do we move to next-generation consoles? Because development was going long, we decided we were going to be in the next console cycle anyways.”
“If we’d ended up on lower-spec machines, it wouldn’t just be that [The Witness] would have lower-poly models. It would’ve affected the style all over the place; the style of the game would’ve been different. I don’t think it would’ve been as nice.”
For Ghost Games, the new shepherd of EA’s Need for Speed franchise, next-gen does come down to “more power”. This power – and the new set of expectations that come with it – frees the team to think outside of the box when it comes to gameplay innovation. A new generation allows developers to think about what’s possible instead of wringing more blood from a worn-out stone.
“It makes us think differently. Every time there is a transition we start thinking about what would be possible.”
Ghost Games executive producer Marcus Nilsson
“It makes us think differently,” said Ghost Games executive producer Marcus Nilsson. “Every time there is a transition we start thinking about what would be possible. We are not locked into old boundaries anymore. From that we get great innovations like AllDrive. The systems are giving us power to do more, more AI, more particles etc. Just turning everything up really.”
Nilsson also noted that the PlayStation 4 and Xbox One provide other options, including social networking features and second-screen modes, which “opens up creative solutions around cross-platform play.”
One of the highlights of Sony’s launch window slate for the PlayStation 4 is Infamous: Second Son from Sucker Punch. While the game simply looks amazing, improved graphics and horsepower also mean the human element of Infamous can be pushed forward.
“[Infamous: Second Son] is all performance captured,” Sucker Punch co-founder and director of development Chris Zimmerman told us. “We actually use all kinds of cameras, with dots on the actors’ faces getting mapped through 3D scans. As you see people in the game, you’ll see their faces move in realistic ways.”
“See the wrinkles appear?” Zimmerman pointed out in a demo of Second Son, “we are actually animating 15,000 vertexes in his face 30 times a second to get that to happen that well. The thing that really matters for a game like this is you can actually see the characters act. You can read his face. You have a million years of human evolution that’s trained you to read people’s expressions and their faces; now we can bring that to you. That is the expression that these actors had when they did the scene. If we show you the video of their faces and then show you the in-game feature, you’ll be like ‘that’s the expression that guy had on.’ It seems dumb, but it matters.”
In some cases, though, the PlayStation 4 and Xbox One will just allow what previous generations have allowed: more, better-looking things onscreen in our games. And even that can improve the player’s experience. For BioWare Edmonton and Montreal general manager Aaryn Flynn, next-gen means a more immersive and interactive game world for BioWare fans.
“With the next generation of consoles, the most important question we ask ourselves is ‘How does this help our storytelling?’ As we’ve worked with them, we think it starts with a density and dynamism that wasn’t possible previously,” said Flynn. “‘Density’ in the sense of more interesting things on the screen that help immerse you in the game world, and ‘dynamism’ in that they are more interactive than ever before.”
The generation has only just begun. Developers still have plenty of time to learn how to make the PlayStation 4 and Xbox One dance and sing. What’s been shown so far is pretty damn good, so let’s sit back and enjoy the future.
The issue was discovered by Bogdan Alecu, a system administrator at Dutch IT services company Levi9, and affects all Android 4.x firmware versions on Google Galaxy Nexus, Nexus 4 and Nexus 5. Alecu demonstrated the vulnerability at the DefCamp security conference in Bucharest, Romania.
Class 0 SMS, or Flash SMS, is a type of message defined in the GSM specification that gets displayed directly on the phone’s screen and doesn’t automatically get stored on the device. After reading such a message, users have the option to save it or dismiss it.
On Google Nexus phones, when such a message is received, it gets displayed on top of all active windows and is surrounded by a semi-transparent black overlay that has a dimming effect on the rest of the screen. If the message is not saved or dismissed and a second message is received it gets placed on top of the first one and the dimming effect increases.
When such messages are received, there is no audio notification, even if one is configured for regular incoming SMS messages. This means that users receiving Flash messages won’t know about them until they look at the phone.
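For context, class 0 is selected through the SMS data coding scheme. With a GSM modem in text mode this is typically done by setting the DCS parameter to 16 via AT+CSMP before sending; the Python/pyserial sketch below illustrates that general approach with a hypothetical serial port and phone number, and is not the researcher’s own tool.

```python
import time
import serial  # pip install pyserial

PORT, NUMBER = "/dev/ttyUSB0", "+15551234567"   # hypothetical modem device and recipient

def at(modem: serial.Serial, cmd: str, pause: float = 0.5) -> None:
    """Send one AT command and give the modem a moment to respond."""
    modem.write((cmd + "\r").encode())
    time.sleep(pause)

with serial.Serial(PORT, 115200, timeout=1) as modem:
    at(modem, "AT+CMGF=1")            # text mode
    at(modem, "AT+CSMP=17,167,0,16")  # DCS=16 selects a class 0 ("flash") SMS
    at(modem, f'AT+CMGS="{NUMBER}"')  # start a message to the recipient
    modem.write(b"Flash test\x1a")    # message body, terminated by Ctrl+Z
```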
Alecu found that when a large number of Flash messages — around 30 — are received and are not dismissed, the Nexus devices act in unusual ways.
The most common behavior is that the phone reboots, he said. In this case, if a PIN is required to unlock the SIM card, the phone will not connect to the network after the reboot and the user might not notice the problem for hours, until they look at the phone. During this time the phone won’t be able to receive calls, messages or other types of notifications that require a mobile network connection.
According to Alecu, a different behavior that happens on rare occasions is that the phone doesn’t reboot, but temporarily loses connection to the mobile network. The connection is automatically restored and the phone can receive and make calls, but can no longer access the Internet over the mobile network. The only method to restore the data connection is to restart the phone, Alecu said.
On other rare occasions, only the messaging app crashes, but the system automatically restarts it, so there is no long term impact.
A live test at the conference performed on a Nexus 4 phone with the screen unlocked and running Android 4.3 did not immediately result in a reboot. However, after receiving around 30 class 0 messages the phone became unresponsive: Screen taps or attempts to lock the screen had no effect. While in this state, the phone could not receive calls and had to be rebooted manually.
A second attempt with the screen locked also failed to reboot the phone because only two of over 20 messages were immediately received. This may have been caused by a network issue or operator-imposed rate limiting. The messages did arrive later and the phone rebooted when unlocking the screen.
Alecu said that he discovered this denial-of-service issue over a year ago and has since tested and confirmed it on Google Galaxy Nexus, Nexus 4 and Nexus 5 phones running various Android 4.x versions, including the newly released Android 4.4, or KitKat.
Around 20 different devices from various vendors have also been tested and are not vulnerable to this problem, he said.
Take-Two Interactive Software has repurchased all of the Icahn Group’s stock, a deal worth $203.5 million and involving 12.02 million shares.
“This share repurchase reflects our confidence in the Company’s outlook for record results in fiscal 2014 and continued Non-GAAP profitability every year for the foreseeable future,” said Take-Two CEO Strauss Zelnick.
“With our ample cash and strong expected cash flow, we are able to pursue a variety of investment opportunities, including repurchasing our Company’s stock. On behalf of our board and management team, I would like to thank Brett, James and Sung for their support, dedication and service to our organisation. They leave Take-Two better positioned than ever for continued success.”
The move was funded by cash and cash equivalents on hand and Take-Two explained the move is “part of an ongoing strategy to buy back its shares.”
Take-Two and Icahn gave no reason for the sale of the shares, but as previously agreed, Icahn’s Brett Icahn, Jim Nelson, and SungHwan Cho have resigned from the Take-Two board.
The Icahn Group is overseen by activist investor Carl Icahn, whom Forbes this year named one of its 40 highest-earning hedge fund managers. In the past he has tried to acquire Dell and Marvel Comics, and he owns a ten percent stake in Netflix.
[UPDATE]: Investors did not greet the news warmly, as Take-Two shares traded at twice their average volume and ended the trading day down 5.49 percent to $16.
If a five-day test phase has been any indication, demand in the state of nearly 9 million people could be high. The total number of players logging on hit 10,000 during the first three days of 24-hour testing, regulators said.
New Jersey is the third U.S. state, but by far the most populous, to roll out online gaming. Officials hope the effort can rescue Atlantic City’s sagging casino revenues.
During testing, regulators found “no significant, widespread regulatory problems or technical barriers for going live,” said David Rebuck, director of the New Jersey Division of Gaming Enforcement, in a call with reporters.
Casinos were limited to 500 players on each site at one time during testing, and they were not allowed to advertise widely. As of midnight the restrictions will be lifted for those who won regulatory approval.
“You have to be gradual. You have to be cautious. You have to be measured,” Rebuck said, noting that casinos didn’t want to invite large numbers of players until they knew the systems could handle the traffic.
“You’re going to see accelerating efforts by them to be much more aggressive” about marketing, he said.
The casinos use geolocation services to figure out whether someone from outside the state is trying to hack in online. Such technology has been used already in Delaware and Nevada, the other two states to offer some form of online wagering, but Rebuck said regulators in New Jersey demanded “a higher standard of operations.”
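Stripped to its essentials, that kind of gate resolves the player’s coordinates (from GPS, Wi-Fi or IP data) and tests them against the state boundary before a wager is allowed. The Python sketch below uses a deliberately crude polygon standing in for New Jersey and a standard point-in-polygon test; a licensed operator’s geofence would be far more precise.

```python
# Very rough New Jersey outline as (longitude, latitude) pairs -- illustration only,
# far cruder than anything a regulator would accept.
NJ_POLYGON = [(-75.55, 39.60), (-75.15, 40.60), (-74.75, 41.35), (-73.95, 41.00),
              (-74.02, 40.65), (-73.95, 40.45), (-74.95, 38.93)]

def inside_new_jersey(lon: float, lat: float, polygon=NJ_POLYGON) -> bool:
    """Ray-casting point-in-polygon test: count boundary crossings to the east of the point."""
    hits, n = 0, len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > lon:
                hits += 1
    return hits % 2 == 1

print(inside_new_jersey(-74.45, 40.49))   # New Brunswick, NJ -> True, bet allowed
print(inside_new_jersey(-73.97, 40.77))   # Manhattan -> False, blocked
```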
Regulators and casinos sent testers out of state and asked them to try to crack into the New Jersey websites, but nobody broke through, he said.
“I’m not saying this is foolproof by any means,” he said. “Somebody at some time will find a way to get around this, and we have to be extra vigilant.”
The first test patron logged on to a site operated by Borgata Hotel Casino & Spa from somewhere in New Brunswick on Thursday evening. Many hits came later from areas within New Jersey that are near New York City and Philadelphia, he said.
Borgata, owned by Boyd Gaming Corp and MGM Resorts, is one of the six casino operators whose sites went live as of midnight Monday.
In less than a week, both the PlayStation 4 and the Xbox One will have launched in the world’s most lucrative console markets. If you had to plant a flag to mark the start of a new generation, you’d do well to find a more appropriate spot.
Well, praise be. Microsoft was justifiably lambasted for its early direction and messaging, but the ill-feeling created by that string of fumbled choices was untroubled by all subsequent attempts to retrench and appease. Since then, Sony has walked a blessed path; not exactly free of mistakes and questionable decisions, but bolstered by the knowledge that the scrutiny of both the press and the forum-dwelling public was focused elsewhere. Perhaps now hard numbers can replace the speculation and supposition. Perhaps now we will be able to see the true measure of the policy reversals and resolution deficiencies.
There is, after all, a bigger picture to consider. It can be fun to get lost in the manufactured rivalry of a console war, but both Sony and Microsoft understand that this generation must be about more than the chips in their little – and not so little – black boxes. Gaming has never been more popular, or more culturally prevalent, but a lot has changed since the console companies last played this billion-dollar crapshoot.
So much of the industry’s recent growth has happened away from the traditional world of AAA blockbusters, where audience gains have been handily outmatched by soaring expenses. The early debate may be dominated by familiar concerns over framerates and dots-per-inch, but the terms of this generation will be different from the last. Sony’s mistakes with the PlayStation 3’s esoteric architecture didn’t go unnoticed by either party, and it shows in the hardware.
“The last generation created a bunch of artificial work. You had to do things in a very different way and, in the end, it wasn’t like you got a massive amount of technical performance out of it. It was time that didn’t go into making the games better,” says Nick Button-Brown, general manager at Crytek.
“I like the fact that, this time, it’s all built on architecture that we can understand. If you look at the PS3, people only started to get the most out of the system at the end of the cycle, but that’s five or six years on. That’s terrible. I want to start getting at the most nearer the start. That’s the advantage with simpler and more similar architecture – we’ll be seeing much more from the first games out.”
Crytek is the studio responsible for Ryse: Son of Rome, a standard-bearer for the Xbox One. Button-Brown admits that, while setting a visual benchmark was not the main objective of the project, it was a side-mission of sorts, and the pride with which he describes Crytek’s work indicates that he considers the mission very much accomplished. The smoke, the fire, the beads of sweat running down the lined, wrinkled faces of the characters, the way those characters plant their feet; these are, he boldly claims, new heights for console gaming.
“I do think we’re going to set a visual benchmark; it’s going to be very difficult for anyone to beat our visual performance. We put a lot of work into facial, a lot of work into animation, just making it all feel much more real,” he says. “Is there further we can go? Definitely. We have some high-end cinema tools that don’t run in real-time even on high-end PCs now – we’re talking one, two frames per second. Eventually, we’ll be able to run those in real-time.”
In the absence of stiff competition, Ryse has as strong a claim to the pinnacle of visual excellence as any other launch title, but Button-Brown understands that such victories are short-lived. After all, in blockbuster development, a better looking game is always just over the next hump of the release schedule. Crytek will no doubt persist in that direction, but the impact of this generation’s visual performance will not be as profound as the jump to HD, and the differences between the PlayStation 4 and Xbox One hardware will matter less still. This time, exactly what constitutes the “cutting-edge” will be harder to pin down.
“There’s always more we can do [visually], but I do think you reach a point where, for the user, they feel that it looks as good as it’s going to get, and they’re not going to see a huge difference between [the consoles],” he says. “For us, the leap is about the details. It’s not about one or two big things. It’s about being able to do small things much better: more stuff on-screen, more AI, more physics.”
It would be churlish to ignore the fact that Ryse has failed to stir the imaginations of the critics, eliciting unanimous praise for its visual detail and precious little else. My interview with Button-Brown was conducted prior to the publication of those reviews, but even then he was cognisant of the gamble that creating a launch title for this particular generation represented. In the past, there were obvious, powerful hooks for developers to work with – the advent of 3D graphics and HD graphics, the availability of a hard-drive, online play as a usable tool – but this generation is more diffuse.
“Going into launch, I don’t know whether we’ve spent the resources in the right place. I don’t know whether we’ve focused our efforts in the right place. I’m only going to know that when people get to buy it,” he says.
“We talk to publishers a lot, and one of the most painful questions is, ‘Tell me what next gen gameplay is gonna be?’ It’s not something you can define. Nobody delivers gameplay because it’s next gen; you’re delivering gameplay because it’s good. That’s one of the things we struggled with [in Ryse's E3 demo]. We showed a cut-down version of the gameplay and we were criticised for that. We didn’t see that coming. We were too close, and we cut it down further than people wanted to see.”
However, while the criticisms leveled at Ryse may well be justified, a part of the problem may be that, at the dawn of a new generation, nobody is quite sure what they want to see. They only know what has gone before, and will resist any attempt to smuggle what are regarded as the bad habits of the past into the $400 future. Ryse signalled its intent with combat that closely resembled a QTE. That was never likely to go down well with the press, who instantly suspected Crytek of trying to coast on graphics alone.
“The generational leap is not as clear cut now,” Button-Brown admits. “Maybe in a year’s time we’ll have a better understanding of what the leap really is this time, as people start playing things and we start to see what really matters. I think with hindsight we’ll be able to look back and see, ‘yeah, that was the big step.’”
Perhaps it’s naive to expect more clarity on what might define this generation from developers working so closely with the hardware, but in any case, that would be no slight against Crytek. Apart from Kinect 2.0 on the Xbox One – which may finally have the hardware to honour some of the promises made four years ago – in terms of new game experiences there isn’t an obvious wellspring for original ideas on either console. Indeed, the most obvious differences in the early days of the generation are likely to be found in the service layer: social integration, voice control, multimedia functions, and other areas often dismissed as secondary to the tasks for which a console should be designed.
This is one of the key ideas I took away from my conversation with Michiel van de Leeuw, technical director at Guerrilla Games. Essentially, the moment-to-moment experience of established genres will remain the same, but innovation will arise from, “a deeper, underlying layer.”
“It’s not like we have that one gizmo to make everything really good or different, but the way that the operating system and the games work together, it’s much more of a marriage of those two things,” says van de Leeuw. “It’s a much more holistic approach to the console. How do people use it? How do people want to use it? How do we make sure that every hour of using your console is an hour spent having fun? And almost nothing is more fun than sharing experiences with other people. It’s all integrated, and under the hood there’s a lot of complexity to make sure that you don’t notice it. A lot of magic is necessary to make it look simple.”
As a subsidiary of Sony Computer Entertainment and the developer of a key launch title, Guerrilla Games was part of the inner circle that formed around Mark Cerny during the PlayStation 4’s creation. The most taxing problem, the subject of the most meetings and debates, was how to improve the experience around and outside of the games – streaming, background downloads, switching between applications, and so on. For Cerny, “immediacy” was a watchword.
When it came to the fundamental hardware architecture, however, van de Leeuw says that the directive was relatively simple: “give us more…as many graphical gizmos as you can afford.” The extra power was a given rather than the main focus.
“I like to ask people about what the next generation should be about, and everyone says, ‘it has to be photo-realistic, and everything has to be more. There has to be thousands of people and blah, blah, blah.’ But why is that fun? If you have 1000 people around you, do you feel more attached to them than if you just had one or two? Technology does not immediately result in a more satisfying experience. The first layer that people think about is better graphics, more of everything. And then they think, ‘What do I need more of? I don’t know, really, but there must be more of something’.”
There it is again: the great, unknowable ‘something’ that, nevertheless, everyone is waiting impatiently to see. Killzone: Shadow Fall has fared better with the critics than Ryse, but the expectation of clear, identifiable progress is used as ammunition in the majority of its negative reviews. For van de Leeuw – who also spoke to me prior to the publication of his game’s review scores – launch titles are not necessarily supposed to alter the way people look at games as a whole, but he also makes no secret of the increasing complexity of productions on the scale of Killzone. More power can make life easier in some respects, but certainly not all.
“You have to focus on 1000 things at the same time, and at the same time as that you need to grow your company, because you need more people to focus on all of those things. That, by itself, becomes a problem, because it becomes difficult to manage the complexity brought by all of those extra people. It’s very challenging.
“We’re working with first-person shooters, and look at how incredibly complex these things are. You’re not just selling one game: you’re selling a movie, and a game, and a multiplayer experience that needs to fit with eSports, and it’s all packaged together. And it all has to be good, because the competition is incredibly, and increasingly, good.”
Indeed, it is the progress evident in individual games, rather than the super-charged hardware, that truly throws down a gauntlet at the feet of the industry’s developers. Umpteen gigabytes of GDDR5 memory is not nearly as powerful a motivator to do better work as the release of, say, The Last of Us or The Walking Dead. New hardware may give developers more options, but the real skill lies in making the right decisions. When there is enough of an installed base to offer a safety net, van de Leeuw says, the industry’s most talented developers will start taking creative risks, and new genres will emerge.
But will that innovation be exclusive to a specific platform? When a consumer makes their decision to buy either a PlayStation 4 or an Xbox One, is the potential for new ideas a relevant factor? From the developer side, van de Leeuw says, the differences in the hardware of this generation may not offer the sort of rewards that Naughty Dog and Guerrilla wrung out of the PlayStation 3’s distinctive Cell processor. Today, with teams spiralling into the hundreds, budgets on the rise and a dozen other platforms to consider, the emphasis is on efficient tools and flexible engines. Microsoft and Sony made a conscious choice to be more similar than different in terms of architecture, with developers’ needs firmly in mind.
“Being able to squeeze more out of the console by really focusing on it allowed us, in the past, to create experiences that couldn’t be done, or would be much harder to do if we had to split our focus. But I think we’re coming to the day where the amount of effort you have to put in to do that, it’s questionable whether it’s worth it.
“Our games are getting so big. We try to make our experiences richer for gamers, but at some point… there are pros and cons. Sometimes we wished that things were easier. The [PlayStation 3] was difficult to program for, but I still sometimes miss it because it was also very powerful. You could do a lot of stuff that’s still very difficult to replicate, but the time for bespoke architectures is slowly going away.
“If you look back, raw assembly and raw power were what enabled new experiences. Nowadays, experiences are defined or limited by how efficient our toolsets are, how smooth our workflow is, how quickly we can develop, and how much time we have to spend on mundane distractions… Bespoke architecture allows you to do cool and crazy stuff, and from a technical point-of-view I’m still in love with that sort of thing, but I have a 230-person studio that wants to make a killer title.”
Despite what many executives have claimed in calls to their investors, both van de Leeuw and Button-Brown either strongly imply or directly confirm that the cost of making those “killer titles” will rise this generation – not to the same degree as they did with the Xbox 360 and PS3, perhaps, but certainly beyond the already precarious conditions that exist today. While we pore over screenshot comparisons, declaring winners and losers over slight differences in observable visual performance, it’s worth considering what any third-party would actually stand to gain from making one version of a game significantly better than another. Indeed, at companies like Epic, EA and Crytek, the emphasis has been on creating cost-saving tools that work seamlessly across all platforms, effectively glossing over aspects of the hardware that could lead to substantial gains in performance. First-party developers will still pursue that, of course, but, according to Button-Brown, for everyone else the base-level of AAA acceptability now sits at a daunting height on both platforms.
“If anything is just okay, it’s now terrible. ‘Solid’ is a failure. You now have to be so good,” he says. “The teams are getting larger and the risks are getting higher. We’re trying to do a lot of procedural stuff in this next generation to keep costs under control. It’s one of the ways we’re trying to keep that down, but it’s still a cost increase. Each asset needs to be so much better, so much more defined, than it was in the previous generation. No amount of procedural is going to change the fact that your underlying asset just has to be that much better.”
All of that hard-scrabble at the top end of the industry – essentially, fewer companies using more resources to create and market a smaller number of increasingly large games – will have a clear upside for independent developers. Indeed, right now, the beneficial ramifications of Sony’s decision to court indies as early as possible are arguably the most significant difference between the PlayStation 4 and the Xbox One. It always felt like a smart move, and that feeling will be further justified as the paucity of $60 blockbuster releases becomes more apparent.
Microsoft’s early digital strategies and the Xbox One’s evidently underpowered hardware may have monopolised the headlines, but Oddworld Inhabitants’ Lorne Lanning believes that it’s Microsoft’s belated effort to secure the diverse, free flow of content from the indie sector that has truly given Sony the advantage. That reluctance to open up the Xbox platform, he argues, is tied to a big-business mentality that no longer works in a connected entertainment medium – the very same mentality that led to the unanimously derided online check-ins and multimedia focus that dominated the Xbox One’s early messaging.
“ID@Xbox was a bittersweet victory,” Lanning says. “If you have your ear to the ground today, you could see that those policies were going to blow up in its face, particularly when you see what [Sony] was doing. That was an old way of thinking, a way of thinking that was all about control. It’s a trickle down from being a monopoly. There’s a reason there was a class-action suit [against Microsoft]. There’s a reason there was an SEC, antitrust thing. There’s a very good reason for that. They wanted to control everything. The people who made those policies were still thinking very much in that way, and it blew up in their faces.”
For Lanning, this will be a generation defined by consumers getting what they want, rather than what they’re given. The generation where consumers wrest control of gaming back from the companies that have controlled it for so long – platform holders, publishers, retailers – and seek satisfaction from the most agile creative forces. There may be some lingering resistance from those with vested interests in established models, but Lanning believes any company seeking to stand in the way of this intractable change is unlikely to emerge with much credit. There will be more products offering a wider variety of experiences than on any previous generation, with price-points to suit every wallet. The lines of communication are wide open. There is nowhere left to hide.
“As people are becoming more informed and more connected, the shenanigans are becoming more transparent. And with that, what we’ll get is more diversity,” Lanning says. “The industry made up of five publishers really isn’t that long ago, and now what’s going on? How many self-publishing indies are there that can get a 1.5x return on each game and keep building? Maybe they can’t grow and be 500 people by the next year, but they can add 5 more by the next year.”
I mention the prevailing fear that the marketplaces on the Xbox One and PlayStation 4 will become too crowded – that by making consoles a more accessible place for independent developers, they will lose the focus that created huge successes like Castle Crashers, Super Meat Boy and Braid. For Lanning, it’s a worthwhile trade, and one of the most important ways that indies need to “grow up” to take advantage of the incredible opportunity this generation represents. The Battlefields and the Assassin’s Creeds will continue to exist and thrive, but the average consumer knows that already. What they don’t know about are games like Octodad, Below and Everybody’s Gone to the Rapture, and more fool the studio who leaves it up to Microsoft or Sony to raise their profile.
“If we sell a game now for $10, we get $7 on digital networks. Once upon a time, we weren’t even getting $7 on a $60 game,” Lanning says. “It’s a whole different thing, but you have to bring your own visibility. That’s your responsibility. Beyond just designing the game, we have to design how to build the relationship with our audience. People know that they want the GTA and the Call of Duty, and they’re gonna be on both systems. But they also want the surprises, and they want to experiment with those surprises at below the $60 price range. The audience always wants more choice.
“The biggest earners are gonna be the big AAA titles, because they have the $100 million marketing campaigns. You can’t compete with that. But in the years to come, the big properties at E3, the $100 million properties, they will have started off in the indie space. They’re gonna innovate cheaper, faster and more with their audience right away. That’s a guarantee.”
Twitter Inc said it has put in place a security technology that makes it harder to spy on its users and called on other Internet firms to do the same, as Web providers look to thwart spying by government intelligence agencies.
The online messaging service, which began scrambling communications in 2011 using traditional HTTPS encryption, said on Friday it has added an advanced layer of protection for HTTPS known as “forward secrecy.”
“A year and a half ago, Twitter was first served completely over HTTPS,” the company said in a blog posting. “Since then, it has become clearer and clearer how important that step was to protecting our users’ privacy.”
Twitter’s move is the latest response from U.S. Internet firms following disclosures by former spy agency contractor Edward Snowden about widespread, classified U.S. government surveillance programs.
Facebook Inc, Google Inc, Microsoft Corp and Yahoo Inc have publicly complained that the government does not let them disclose data collection efforts. Some have adopted new privacy technologies to better secure user data.
Forward secrecy prevents attackers from exploiting one potential weakness in HTTPS, which is that large quantities of data can be unscrambled if spies are able to steal a single private “key” that is then used to encrypt all the data, said Dan Kaminsky, a well-known Internet security expert.
The more advanced technique repeatedly creates individual keys as new communications sessions are opened, making it impossible to use a master key to decrypt them, Kaminsky said.
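From a client’s point of view, forward secrecy shows up in the negotiated cipher suite: ephemeral (EC)DHE key exchange puts “ECDHE” or “DHE” in the cipher name. A small Python sketch that checks what a server negotiates:

```python
import socket
import ssl

def negotiated_cipher(host: str, port: int = 443):
    """Open a TLS connection and report the cipher suite the server picked."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.cipher()   # (name, protocol_version, secret_bits)

name, protocol, bits = negotiated_cipher("twitter.com")
# Ephemeral Diffie-Hellman in the name ("ECDHE"/"DHE") means a per-session key;
# TLS 1.3 suites drop the prefix from the name but always use ephemeral keys too.
print(name, protocol, bits)
print("forward secrecy:", "ECDHE" in name or name.startswith("DHE") or protocol == "TLSv1.3")
```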
“It is a good thing to do,” he said. “I’m glad this is the direction the industry is taking.”
LG is investigating claims that its TVs send details about their owners’ viewing habits back to the manufacturer.
Blogger Jason Huntley detailed how his Smart TV was sending data about which channels were being watched. It appears that TVs uploaded information about the contents of devices attached to the TV, which is probably illegal. The UK Information Commissioner’s Office is investigating too.
When Huntley contacted the South Korean company he was told that by using the TV he had accepted LG’s terms and conditions so there. Huntley said details of what channels he had been watching had been sent even after a privacy setting had been changed.
He first came across the issue in October when he had begun researching how his Smart TV had been able to show his family tailored adverts on its user interface. When he looked at the TV’s menu system, he noticed that an option called “collection of watching info” had been switched on by default.
After switching it off, he had been surprised to find evidence that unencrypted details about each channel change had still been transmitted to LG’s computer servers, but this time a flag in the data had been changed from “1” to “0” to indicate the user had opted out.
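Findings like Huntley’s generally come from watching the TV’s outbound traffic on the local network. The sketch below (Python with scapy, a hypothetical TV IP address, and root privileges assumed) prints any plaintext HTTP payloads the set sends, which is roughly how an unencrypted channel-change beacon and its opt-out flag would be spotted:

```python
from scapy.all import IP, TCP, Raw, sniff   # pip install scapy; run with root privileges

TV_IP = "192.168.1.50"   # hypothetical LAN address of the smart TV

def show(pkt):
    # Print any plaintext payload the TV sends to a web server on port 80.
    if pkt.haslayer(Raw) and pkt[IP].src == TV_IP and pkt[TCP].dport == 80:
        print(pkt[Raw].load.decode(errors="replace"))

sniff(filter=f"src host {TV_IP} and tcp dst port 80", prn=show, store=False)
```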
The Guardian has the papers, which show a US National Security Agency (NSA) memo describing how it can collect information about unsuspecting UK citizens and keep hold of their data, meaning their phone communications and their email contacts. This can then be used to build up information about links between people.
“Sigint [signals intelligence] policy … and the UK Liaison Office here at NSAW [NSA Washington] worked together to come up with a new policy that expands the use of incidentally collected unminimized UK data in Sigint analysis,” says the memo.
“The new policy expands the previous memo issued in 2004 that only allowed the unminimizing of incidentally collected UK phone numbers for use in analysis. Now SID analysts can unminimize all incidentally collected UK contact identifiers, including IP and email addresses, fax and cell phone numbers, for use in analysis.”
The agreement has its roots in the 1946 UK/USA Signals Intelligence Agreement, which should prevent allied intelligence agencies from monitoring each other’s citizens without permission. However, it includes a caveat, which is that this can happen, as long as it is done in secret and in the best interest of nation states.
Governments reserved the right to stop behaving so politely “when it is in the best interests of each nation,” reports the Guardian, which has reproduced part of the memo.
“Therefore, under certain circumstances, it may be advisable and allowable to target second party persons and second party communications systems unilaterally, when it is in the best interests of the US and necessary for US national security…,” it adds.
“There are circumstances when targeting of second party persons and communications systems, with the full knowledge and co-operation of one or more second parties, is allowed when it is in the best interests of both nations.”
The price of a stolen identity has dropped as much as 37 percent in the cybercrime underground to $25 for a US identity, and $40 for an overseas identity.
Researcher Joe Stewart of Dell SecureWorks teamed with independent researcher David Shear to get an insider’s look at the cost of hacking services. For $300 or less, you can acquire credentials for a bank account with a balance of $70,000 to $150,000, and $400 is all it takes to get a rival or targeted business knocked offline with a distributed denial-of-service (DDoS)-for-hire attack.
Meanwhile, the researchers have noticed that stolen IDs and bank account credentials are getting cheaper because there is simply so much of this data on the market. Part of the problem is that so many US organisations have been hacked and personal details stolen. Stolen personal identities went for $40 per US ID and $60 per overseas ID in 2011, when Dell SecureWorks last studied pricing in the underground marketplace. Now those IDs are 33 to 37 percent cheaper.
Competition among the cybergangs is stiffer as more people join in the scams, the report said.
The Salesforce Superpod will be based on HP’s Converged Infrastructure hardware and jointly developed and marketed by Salesforce.com and HP. The Superpods will be hosted in Salesforce.com’s data centers and cost customers extra money, but pricing details weren’t provided Monday.
HP CEO Meg Whitman will discuss the offering when she joins Salesforce.com CEO Marc Benioff at the start of the company’s Dreamforce conference, which began on Tuesday in San Francisco. HP says it will be the first customer for the Superpod.
It marks a significant shift in strategy for Salesforce.com, which has historically served all its customers from its multitenant cloud, where they share an application instance with their data kept separate. The emergence of the Superpod may have been provoked by demand from large customers not fully comfortable with the multitenant delivery model.
Multitenancy is a common architecture for SaaS (software as a service) vendors, as it provides advantages over traditional hosting such as the ability to update and patch many customers at once. It wasn’t immediately clear Monday if the Superpod option will mean those customers can choose to take updates on a different schedule.
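The core of multitenancy is that customers share one schema and one application instance, with every query scoped by a tenant identifier; that is what makes patch-everyone-at-once updates possible, and also what some large customers are wary of. A schematic Python/SQLite sketch with illustrative table and tenant names:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (tenant_id TEXT, name TEXT)")

# Two tenants share one table, one schema and one application instance.
db.executemany("INSERT INTO accounts VALUES (?, ?)",
               [("acme", "Road Runner Ltd"), ("acme", "Coyote Inc"),
                ("globex", "Hank Scorpio LLC")])

def accounts_for(tenant_id: str):
    """Every query is scoped by tenant_id: isolation is logical, not physical."""
    rows = db.execute("SELECT name FROM accounts WHERE tenant_id = ?", (tenant_id,))
    return [name for (name,) in rows]

print(accounts_for("acme"))     # only acme's rows come back
print(accounts_for("globex"))
```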
The announcement is also a surprise on another level, given the high-profile pact Benioff made with Oracle CEO Larry Ellison in June. Under that deal, Salesforce.com committed to continue using Oracle software to build its own products for the long term, and also said it would use Oracle’s engineered systems such as Exadata.
Ellison and Benioff also buried the hatchet on their long-running public feud. Benioff even invited Ellison to Dreamforce, an offer Ellison accepted at the time.
But Ellison’s visit may be off now that Benioff has cozied up so closely to HP, a company that sued Oracle after it announced it would stop porting its software to the Itanium chip architecture used in high-end HP servers.
Chip designer Mediatek has unveiled the first octa-core system on chip (SoC) for mobile devices.
The MT6592 is an ARM-based processor capable of running all eight cores at up to 2GHz, and it offers a scheduling algorithm to ensure that all eight cores are managed effectively to control power draw and temperature.
The chip uses its Heterogeneous Computing (HC) architecture to act as foreman, distributing tasks to the best processor for the job, covering CPU, GPU, DSP, and multiple connectivity, multimedia, camera and display engines, including navigation, and sensor cores.
The chip is equipped with Mediatek Clearmotion for automatic upscaling of standard 24/30 frames per second video to high-quality 60fps video. Also onboard is support for 802.11n WiFi, Miracast, Bluetooth, GPS and FM tuner functions. It is also capable of running Ultra HD H.264 and new video standards including H.265 and VP9.
Mediatek Smartphone Business Unit general manager Jeffrey Ju enthused, “The MT6592 delivers longer battery life, low-latency response times and the best possible mobile multimedia experience. Being the first to market with this advanced eight-core SOC is testament to the industry-leading position of Mediatek.”
The prospect of octa-core mobile devices could have huge ramifications for the buying public. Although most everyday web surfers will probably not notice the difference, gamers and multimedia users are likely to find that the next generation of gadgets that have octa-core processors offer an experience on a par with their desktop cousins.
The MT6592 is expected to appear in Android 4.4 KitKat devices in early 2014, though as yet no manufacturers have announced that they will be using it in their products.
vBulletin has been compromised, leading to the theft of customer password data that has raised concerns that there is a critical vulnerability threatening websites running the program.
vBulletin, a proprietary internet forum software package that runs the forums for popular websites such as Macrumors and Ubuntu, announced in a blog post on Friday that its security team discovered sophisticated attacks on its network involving illegal access to forum user information.
“Our investigation currently indicates that the attackers accessed customer IDs and encrypted passwords on our systems. We have taken the precaution of resetting your account password,” vBulletin Technical Support lead Wayne Luke wrote.
The acknowledgement arrived just a few days after Macrumors admitted that a security breach had led to the exposure of hashed passwords for over 860,000 users. At the time, Macrumors editorial director Arnold Kim wrote in a short advisory that the attack resembled the attack on Ubuntu user forums in July.
Suspicions were confirmed when members of hacker team Inj3ct0r published a Facebook post claiming that they were responsible for the attacks on both vBulletin and Macrumors.
The Inj3ct0r Team members said they breached the vBulletin website by exploiting a previously undocumented vulnerability in the vBulletin software. They then used this privileged access to obtain login credentials for the Macrumors moderator account. After logging in to the account, they stole the password hashes for 860,106 Macrumors accounts.
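The reason leaked hashes of this kind still matter is that the salt is stored alongside the hash, so a dictionary attack is cheap. The Python sketch below uses the salted double-MD5 construction widely reported for vBulletin-era forums; the passwords, salt and wordlist are made up, and this is an illustration of the weakness rather than a statement about any specific dump.

```python
import hashlib

def vb_style_hash(password: str, salt: str) -> str:
    # Salted double MD5: md5(md5(password) + salt), as widely reported for
    # older vBulletin forums. Fast hashes like this are cheap to brute-force.
    inner = hashlib.md5(password.encode()).hexdigest()
    return hashlib.md5((inner + salt).encode()).hexdigest()

# A leaked row gives the attacker both the hash and its salt (all values made up).
leaked_salt = "Qx3"
leaked_hash = vb_style_hash("hunter2", leaked_salt)

for guess in ["letmein", "password1", "hunter2", "dragon"]:   # tiny wordlist
    if vb_style_hash(guess, leaked_salt) == leaked_hash:
        print("cracked:", guess)
        break
```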
“Inj3ct0r Team hacked vBulletin.com and Macrumors.com. Inj3ct0r Team hacked the big CMS vendor vBulletin.com. We got shell, database and root server,” the post read.
“We wanted to prove that nothing in this world is not safe. We found a critical vulnerability in vBulletin all versions 4.x.x and 5.х.x. We’ve got upload shell in vBulletin server, download database and got root.”
vBulletin has yet to respond to our request for comment regarding the claimed zero-day attack on its network.
However, once word got out that there might be a critical vulnerability in the forum software, user forums for the Defcon hacker conference were temporarily shut on Sunday evening. The forum’s landing page now reads, “We have disabled the forums until there is resolution on a possible vulnerability. Once we have a fix/patch installed, we’ll re-open service.”
The Inj3ct0r Facebook post gives the option for forum owners to buy a patch to fix the vulnerability they exploited, with a link that directs them to the team’s website for “7000 gold”, a currency that we can only imagine derives from the underground.
Continuing the trend for announcing that your dot com startup is worth a horrific amount of money, remote access filesharing service Dropbox has announced a new round of venture capital fundraising that values it at an extremely ambitious $8bn.
Dropbox, which recently refined its service to allow a more obvious divide between work and home files, is seeking $250 million in venture capital over the next few weeks, which would put its valuation at double the previous estimate from 2011, a still hefty $4bn.
Despite coming under heavy competition from rival services including Box and Sugarsync, as well as big players like Amazon and Google, Dropbox has managed not only to hold its own, but to become a byword for the entire category.
The news follows Twitter’s recent IPO and the revelation that nudie selfie app Snapchat turned down offers of up to $4bn for the service from Google and Facebook.
In the middle of one of the biggest economic downturns since the 1930s, information technology firms have indulged in escalating acts of corporate chest puffing that many fear might become a second dot com bubble, ripe for bursting in a similar fashion to events following the millennium.
In the meantime, with Dropbox recently claiming to have 200 million users and introducing both remote wiping and split “storage lockers” in an attempt to attract big business, it is clear that the company has raised its ambitions.