“Both Sony and Microsoft have said games can be resold and that’s exactly what we anticipated. It’s a recognised way to make these games more affordable. All three new platforms understand that,” Bartel told Forbes.
“As people upgraded to PS3 they traded in their old systems and libraries, which is why Sony made the move to not support backwards compatibility with later iterations of PS3. That’s why the ‘buy, sell, trade’ model works well. It enables people to purchase new games by trading in their old ones. We expect to see the same thing with this transition for PS4 and Xbox One. Trade-ins allow for a seamless transition.”
He added that 70 per cent of the $1 billion that GameStop brings to the market goes to new game sales.
After the Xbox One reveal yesterday there was still some confusion about how the machine’s internet requirements would affect the sharing and resale of games, leaving Microsoft executives to clarify the details.
It appears that the Ouya is going to be a bit delayed.
This is good news, though, as the delay comes because the console’s developers have more cash to spend on it, $15m more to be precise.
Ouya already raised more than $8m on Kickstarter, and now, just as it should be taking its last steps towards completion, it has had almost twice as much again injected into it by lovely venture capitalists.
We were expecting the console in early June, but that has slid back to 25 June. The time and money will in part be used to solve an issue with sticky buttons, something that usually only happens once consumers have taken some hardware home with them.
The money comes from venture capital firms and other companies including Kleiner Perkins Caufield & Byers (KPCB), Nvidia, Shasta Ventures, and Occam Partners. KPCB’s general partner Bing Gordon will join the Ouya board of directors as a result.
“We want Ouya to be here for a long time to come,” said Julie Uhrman, Ouya founder and CEO.
“The message is clear: people want Ouya. We first heard this from Kickstarter backers who provided more than $8 million to help us build Ouya, then from over 12,000 developers who have registered to make an Ouya game, next from retailers who are carrying Ouya online and soon on store shelves, and now from top pioneering investors.”
Gordon is in charge of digital investments at KPCB and is a veteran of the games industry, having started at Electronic Arts in 1982.
“Ouya’s open source platform creates a new world of opportunity for established and emerging independent game creators and gamers alike,” he said.
“There are some types of games that can only be experienced on a TV, and Ouya is squarely focused on bringing back the living room gaming experience. Ouya will allow game developers to unleash their most creative ideas and satisfy gamers craving a new kind of experience.”
Ouya consoles should start arriving in living rooms on 25 June. If you want one, you are going to have to come up with around $100, plus another $50 if you want a second controller.
It’s ancient history now, but once upon a time, if you wanted to play the most recent and most interesting games, you had to get up, leave the house and make your way to an arcade. Games consoles and home computers lived further down the food chain, their owners waiting for often sub-par versions of glorious arcade hits to be released on home systems. The real experience happened in an arcade.
Even to those who experienced that era, it’s a little hard to believe when you look at the sad remnants of the arcades’ former glory. Even in supposedly arcade-mad Japan, games generally find themselves wedged ignominiously between gambling machines occupied by middle-aged chain-smokers and UFO Catcher booths promising, but rarely delivering, stuffed toys and sweets for bored teens on dates. In western countries, sad, lonely fighting game machines are just stuffed in wherever “arcade” owners ran out of fruit machines to install.
The reasons for this change are fundamentally technological. Arcade machines are big, bulky and expensive to move or replace. Once, that meant that they were vastly more powerful than home systems – but the accelerating pace of technological progress turned the size and expense of arcade machines into a liability rather than an advantage. Cheap, rapidly updated computers and consoles (and eventually even phones) first matched and then far outstripped the processing capabilities of big arcade cabinets. Rapid updates in graphics, processing, storage, networking, controls and screen resolutions were comfortably adopted by the home market, the costs buffered by cheap, cheerful hardware and absorbed by the wallets of millions of consumers. Arcade operators, faced with replacing large numbers of huge, expensive systems in order to keep track of such changes, fell behind completely.
Social factors either exacerbated or softened this blow, but these were highly region dependent. In Japan, where small family living spaces have engendered a culture in which many social activities are carried out external to the home, arcades persisted as date spots, as places to hang out with friends and – perhaps most importantly – as a venue for games too large, too noisy or too intrusive to be played in a small family home. In parts of the West, though, social factors intervened to hasten the decline, with a perception of arcades as “seedy” venues (in the grand tradition of pool halls and their ilk) discouraging many potential players, while regions with legalised gambling were quick to drop videogames in favour of more profitable slot machines.
There has been talk of an “arcade renaissance” on several occasions over the years, yet each time it has ended in disappointment. Even as living spaces in many Western countries (the UK is a particularly notable example) have shrunk dramatically in average size, Western consumers have demonstrated a continued willingness to engage with loud, bulky games at home. Rock Band and Guitar Hero were hugely successful as home games in the West, where their Japanese equivalents, Konami’s Guitar Freaks and Drum Mania, have acted as sustaining lifeblood for arcade venues. It’s also notable that even as Japanese arcades have innovated and invested, launching extraordinary new games which leverage all sorts of new technologies, from the country’s ultra high-speed broadband networks through to the possibilities of RFID-enabled cards, the arcade sector’s health has still declined – a drop-off in footfall, revenue and floor space that’s been slower than in the West, but still isn’t exactly the rude health you might have come to believe from fawning articles about amazing Japanese arcades in the western media.
As such, it’s important to be cautious about any notion of an arcade recovery. Yet if we were to envisage any potential uplift in the fortunes of the out-of-home gaming sector, we can easily say what one key factor would be – just as in the heyday of the arcade, these venues would need to provide games which you simply cannot experience at home. This won’t come about, this time around, through more powerful graphics or processing – the trends in those areas are focused on miniaturisation and cost-efficiency, targeting the ability to put high-end 3D into phones rather than building pricey, bulky, ultra high-end systems. Instead, the focus would have to be on experiences that don’t work at home for reasons of space, budget, intrusiveness – or preferably, a combination of all of the above.
The reason I raise this issue now is that in the past few weeks, most of us will have seen videos or demonstrations of technologies which, although their creators purport to be focused on the home market, clearly fall into these categories. One is Microsoft’s IllumiRoom system, which uses Kinect to map a 3D space and then projects imagery matched to that 3D map. It’s a great piece of technology with extraordinary gaming potential. It’s also abjectly unsuited to an ever-increasing number of living rooms around the world. Kinect alone is an impossibility for many players due to the space and room layout it demands; IllumiRoom, demanding similar space if not more and intrusively taking over the entire room such that nobody else can use it while the game is being played, is simply not going to work for most people and most homes. Outside the home, though, in a dedicated venue? The potential of the technology is extraordinary, and the experiences it could deliver would make such a venue a destination for gamers seeking something that just won’t work at home.
The same thought process applies, to some extent, to the Oculus Rift. It’s not that the superb VR headset hardware won’t work at home – of course it will, and it’ll probably only be a few hardware generations before the compromises presently being made in the name of cost are ironed out by technological progress. However, the “full” VR experience, with a custom controller (a gun, perhaps, or a full-body motion-sensing suite), a multi-directional treadmill and so on, is simply going to be too expensive for most users – and even if prices collapsed, it’s too big and unwieldy to live in most people’s apartments. Yet the entertainment potential of such a fully functional setup, running in parallel with a dozen other such suites so that a group of friends can explore a virtual world together, is enormous – and from a commercial perspective, not even all that space-consuming.
Of course, technology is just one factor. Technologies such as these (and I’m sure that others exist which also fall into the trap of “amazing, but it won’t work in my house”) can give a compelling reason for people to engage with out-of-home gaming – but the social factors also have to be right if an arcade renaissance is to be possible. Social factors are trickier, in many ways, than getting the hardware and the software right. Losing the seedy, unwelcoming image of the arcade in some regions will be tough; in others, where arcades have died entirely, the marketing of an entirely new social pursuit would present a major challenge. Getting people to try out something like this might be easy; getting them to see a trip to the VR centre with friends as an entertainment option on par with a trip to the cinema is likely to be much harder.
All the same, the entertainment possibilities opened up by technologies of this kind, which are now reaching a mature, usable stage in their development, ought to create an optimism around arcades and out-of-home gaming that hasn’t been seen for some time. Social or commercial aspects could still pull the rug out from any hope of recovery or renaissance – but the potential certainly exists for new kinds of gaming and interactive entertainment to take their place as key social out-of-home experiences in the coming years.
Some well-known industry analysts are suggesting that Microsoft could be as much as six months behind on software development for the Xbox Next. According to these sources, a combination of events has put Microsoft in this position, but it seems that some titles that were being developed internally have been canned. The situation has led Microsoft to seek exclusives from third-party sources to fill in the gaps.
We first suggested a link between EA and Microsoft over some sort of exclusive deal back when EA was absent from Sony’s press conference earlier this year. Now we find that the two have a deal of some sort for the new Respawn title, which will apparently be exclusive to the Xbox 360 and Xbox Next. That’s not all, as Microsoft is expected to have more exclusives to announce. The real question is whether these are true exclusives or just timed exclusives that we will see on the PS3/PS4 at some point in the future.
Even if Microsoft’s internal exclusives are thin on the ground for the Xbox Next at launch, we expect the company to catch up; we don’t see a big gap developing, because Microsoft has solid properties to use on the Xbox Next and will get those titles developed and out. No worries: it is going to be similar to every console launch, where software is scarce when the system is first released.
AMD has said the memory architecture in its heterogeneous system architecture (HSA) will move management of CPU and GPU memory coherency from the developer’s hands down to the hardware.
While AMD has been churning out accelerated processing units (APUs) for the best part of two years now, the firm’s HSA is the technology that will really enable developers to make use of the GPU. The firm revealed some details of the memory architecture that will form one of the key parts of HSA and said that data coherency will be handled by the hardware rather than software developers.
AMD’s HSA chips, the first of which will be Kaveri, will allow both the CPU and GPU to access system memory directly. The firm said that this will eliminate the need to copy data to the GPU, an operation that adds significant latency and can wipe out any gains in performance from GPU parallel processing.
According to AMD, the memory architecture that it calls HUMA – heterogeneous unified memory access, a play on unified memory access – will handle coherency between the CPU and GPU at the silicon level. AMD corporate fellow Phil Rogers said that developers should not have to worry about whether the CPU or GPU is accessing a particular memory address, and similarly he claimed that operating system vendors prefer that memory coherency be handled at the silicon level.
Rogers also talked up the ability of the GPU to take page faults, and said that HUMA will allow GPUs to use memory pointers in the same way that CPUs dereference pointers to access memory. He said that the CPU will be able to pass a memory pointer to the GPU, in the same way that a programmer may pass a pointer between threads running on a CPU.
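To make the pointer-passing idea concrete, here is a minimal sketch in C. There was no public HUMA programming interface at the time of writing, so the “kernel” below is an ordinary CPU function standing in for work that would be dispatched to the GPU; the point is simply that under HUMA the pointer the CPU allocates is all that needs to be handed over, with no staging copy into a separate device buffer.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Stand-in for work that would be dispatched to the GPU. On a HUMA system
 * the GPU could dereference this same pointer directly; on a discrete GPU
 * the data would first have to be copied into a separate device buffer. */
static void sum_kernel(const uint32_t *data, size_t n, uint64_t *result)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += data[i];
    *result = sum;
}

int main(void)
{
    size_t n = 1u << 20;
    uint32_t *data = malloc(n * sizeof *data);   /* ordinary system memory */
    if (!data) return 1;

    for (size_t i = 0; i < n; i++)               /* CPU fills the buffer */
        data[i] = (uint32_t)(i & 0xFF);

    uint64_t sum = 0;
    /* The hand-off is the pointer itself: no explicit copy to the GPU,
     * with hardware keeping the CPU and GPU views of this memory coherent. */
    sum_kernel(data, n, &sum);

    printf("sum = %llu\n", (unsigned long long)sum);
    free(data);
    return 0;
}
```

On a discrete GPU the same job would typically mean allocating a device buffer, copying the data across the PCI Express bus and copying the result back, which is exactly the latency AMD says its unified memory removes.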
AMD has said that its first HSA-compliant chip, codenamed Kaveri, will tip up later this year. While AMD’s decision to give GPUs access to DDR3 memory will mean lower bandwidth than GPGPU accelerators that make use of GDDR5 memory, the ability to address hundreds of gigabytes of RAM will interest a great many developers. AMD hopes that developers will pick up the Kaveri chip to see just what is possible.
Nvidia will buy back $1bn worth of shares over the course of its financial year in a bid to return money to investors.
In a bid to please investors, Nvidia said $100m of the buy-back will take place during its current quarter.
Nvidia has been paying out a dividend to investors, though it is hardly something to shout about. The firm said part of its total $1.2bn return to shareholders will include a $0.075 per share dividend, which amounts to $50m a quarter.
“Nvidia’s strategies are gaining traction in the market and make us confident in our ability to continue generating cash. We are now broadening our program of giving back cash to our shareholders and plan to return a further $1bn by the end of this fiscal year,” said Nvidia CEO Jen-Hsun Huang.
Nvidia has been reaping the rewards of its Tesla and Tegra businesses, with revenues increasing and very healthy gross margins that sit comfortably above 50 percent.
Huang revealed at the GPU Technology Conference last month that the firm will invest heavily in developing its Tegra system on chip (SoC), telling analysts it represents the company’s largest revenue generation potential.
However, like those of many semiconductor firms, Nvidia’s share price has stagnated recently. It pays a relatively poor dividend, especially in comparison with rivals like Intel. The firm’s decision to return $1.2bn to investors is a move to keep its share price steady and give investors a reason to have more confidence in the company.
Nvidia said it will provide further details of its share buyback program when it releases first quarter fiscal 2014 financial results.
GlobalFoundries has announced that it demonstrated chip stacking using through-silicon vias (TSV) on its 20nm process node.
Spun off from AMD and perhaps best known for fabbing AMD’s processors, Globalfoundries has been investigating TSVs as a method of stacking chips, much like rival chip foundry TSMC. Now the firm has announced that it has demonstrated working 20nm silicon that incorporates TSVs.
According to Globalfoundries, it inserts the TSVs after the Front End of the Line flow and before the Back End of the Line flow, meaning it can use copper for the TSV fill material. The firm claims it developed its own contact protection scheme in order to make TSVs possible during its transition from 28nm to 20nm.
AMD has already said that it is waiting for the move to 20nm, presumably with Globalfoundries; however, it is likely that fabbing TSVs at 20nm will take longer. At the GPU Technology Conference, Nvidia announced that it will use TSVs to stack DRAM on its upcoming Volta GPUs.
David McCann, VP of packaging R&D at Globalfoundries, said: “Our industry has been talking about the promise of 3D chip stacking for years, but this development is another sign that the promise will soon be a reality.
“Our next step is to leverage Fab 8’s advanced TSV capabilities in conjunction with our OSAT [outsourced semiconductor assembly and test] partners to assemble and qualify 3D test vehicles for our open supply chain model, providing customers with the flexibility to choose their preferred back-end supply chain.”
Globalfoundries didn’t say when its 20nm TSV process will be ready for volume production, but the firm’s announcement that it has working silicon might be tempting for potential customers.
AMD claims that the delay in transitioning from 28nm to 20nm highlights the beginning of the end for Moore’s Law.
AMD was one of the first consumer semiconductor vendors to make use of TSMC’s 28nm process node with its Radeon HD 7000 series graphics cards, but like every chip vendor it is looking to future process nodes to help it increase performance. The firm said the time taken to transition to 20nm signals the beginning of the end for Moore’s Law.
Famed Intel co-founder and electronics engineer Gordon Moore predicted that the number of transistors on a chip would double every two years. He also did not expect the ‘law’ to hold for as long as it has. It was professor Carver Mead at Caltech who coined the term Moore’s Law, and now one of Mead’s students, John Gustafson, chief graphics product architect at AMD, has said that Moore’s Law is ending because it actually refers to a doubling of the transistors that are economically viable to produce.
Gustafson said, “You can see how Moore’s Law is slowing down. The original statement of Moore’s Law is the number of transistors that is more economical to produce will double every two years. It has become warped into all these other forms but that is what he originally said.”
According to Gustafson, the transistor density afforded by a process node defines the chip’s economic viability. He said, “We [AMD] want to also look for the sweet spot, because if you print too few transistors your chip will cost too much per transistor and if you put too many it will cost too much per transistor. We’ve been waiting for that transition from 28nm to 20nm to happen and it’s taking longer than Moore’s Law would have predicted.”
Gustafson was pretty clear in his view of transistor density, saying, “I’m saying you are seeing the beginning of the end of Moore’s law.”
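To see the “sweet spot” argument in numbers, here is a toy calculation in C. The wafer costs, transistor densities and yields below are invented purely for illustration and do not reflect real foundry figures; the shape of the result, not the numbers, is the point.

```c
#include <stdio.h>

/* Toy illustration of the cost-per-transistor "sweet spot". Every number
 * here is made up for the sake of the example. */
struct node {
    const char *name;
    double wafer_cost;   /* dollars per processed wafer (invented) */
    double density;      /* transistors per mm^2 (invented) */
    double yield;        /* fraction of good dies (invented) */
};

int main(void)
{
    const double usable_area = 70000.0;  /* rough mm^2 of usable area per 300mm wafer */
    struct node nodes[] = {
        { "40nm", 3000.0,  4.0e6, 0.90 },
        { "28nm", 4500.0,  8.0e6, 0.85 },
        { "20nm", 7000.0, 13.0e6, 0.60 },  /* denser, but costlier and lower yield */
    };

    for (int i = 0; i < 3; i++) {
        /* Good transistors shipped per wafer = area * density * yield. */
        double good_transistors = usable_area * nodes[i].density * nodes[i].yield;
        double cost_per_billion = nodes[i].wafer_cost / good_transistors * 1e9;
        printf("%s: $%.2f per billion transistors\n", nodes[i].name, cost_per_billion);
    }
    return 0;
}
```

With these made-up inputs the middle node comes out cheapest per transistor: the newest node packs in more transistors, but its higher wafer cost and lower yield mean each one costs more to produce, which is the situation Gustafson describes for the 28nm to 20nm transition.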
AMD isn’t the only chip vendor looking to move to smaller process nodes, and it has to wait on TSMC and Globalfoundries before it can make the move. Even Intel, with its three-year process node advantage over the rest of the industry, is having problems justifying the cost of its manufacturing business to investors, so it could be the economics rather than the engineering that puts an end to Moore’s Law.
Nvidia announced five graphics processing units (GPUs) for notebook computers today.
These mobile GPU models aim to conserve battery life and boost performance with three technologies that run in the background.
Nvidia’s Geforce GT 720M and 735M make up the mainstream segment of the announcement with their focus on budget and mid-range notebooks, while Nvidia’s GT 740M, 745M, and 750M are the performance part of the lineup and are intended for users that demand more power.
The chip designer said that all five of these notebook GPUs will come with its GPU Boost 2.0 technology that adjusts the chips’ clock speed to maximise graphics performance.
Two other power enhancing features built into the GPU lineup include Nvidia’s Optimus technology that enables longer battery life by switching the GPU on and off so it runs only when needed, and its Geforce Experience software, which adjusts in-game settings for the best possible performance and visual quality specific to a user’s notebook specifications. This feature also automatically keeps chip drivers up to date.
Nvidia said the new GPUs are available from today. According to the chip designer, all leading notebook manufacturers, including Acer, Asus, Dell, HP, Lenovo, MSI, Samsung, Sony and Toshiba, will be introducing these GPUs. We predict that notebook release dates will be in the latter half of this year.
Nvidia held its GPU Technology Conference (GTC) in San Jose, California two weeks ago, where the firm admitted that physically large GPUs have difficulty passing verification.
Nvidia’s Volta GPU architecture will stack DRAM on the same silicon substrate as the GPU, and the firm said this will require it to increase the size of its chips.
Ouya, the open Android-based console designed by Yves Behar, is being shipped to its Kickstarter backers today, and the company officially announced this week at GDC that it will hit retailers in the US, UK and Canada on June 4. Ouya is promising “hundreds” of titles for the June 4 release and the $99 console will be available at Amazon, Best Buy, GAME, GameStop, Target, and the store on OUYA.tv. Additional controllers will be sold for $49.99. And for digital purchases, consumers will be able to get pre-paid cards with redeemable codes at retail if they wish.
The company said that over 8,000 game developers worldwide are currently developing games, including both up-and-comers and better known game makers like Square Enix, Double Fine Productions, Tripwire Interactive, Vlambeer, Phil Fish’s Polytron Corporation, and Kim Swift’s Airtight Games. “The majority of devs so far are experienced devs who’ve never built an Android game before. About 1 out of 5 have never even built a game before,” Ouya CEO Julie Uhrman said at the GDC unveiling. She boasted that Ouya “already has more titles a couple months before launch than any console has ever launched with.”
The Ouya hardware itself is even smaller than we had previously thought (think Rubik’s Cube or smaller), and its sleek design and brushed aluminum are pleasing to the eye. Uhrman, however, stressed the controller more than anything else. “What we spent the most amount of time on is the controller. We really want this to be our love letter to gamers,” she said, adding that Ouya focused on the ergonomics, the weight, the feel, and wanted it to be a precise, accurate controller. “This is one of the pieces of Ouya that evolved a lot based on early supporter feedback,” she continued.
Apparently, the feedback led to numerous changes to the controller in terms of button placement and the style of d-pad. The team found that many preferred a cross-style d-pad to a disc because it’s superior for fighting games. The engineers also retooled the tension of the analog sticks and the design of the shoulder buttons, and Ouya even made the responsiveness and speed of the center touch pad customizable. In this journalist’s hands, it felt comfortable and familiar while playing a few titles.
After showing off the hardware, Uhrman dived into the user interface of Ouya. The whole UI is incredibly streamlined, with four categories and an apps-like layout. The four categories are Play, Discover, Make, and Manage (which is for settings). Play is simply where anything you’ve downloaded – games or music or video apps – will be placed. Discover is the store, and it’s been designed to encourage people to “find the best games.” For example, sub-selections in Discover include featured channels like Go Retro, Hear Me, Genres, and Sandbox. The plan is to offer more descriptive names for games within genres.
“The way games get exposed in the genre list is based on what we call the O-rank, which is our fun algorithm. It’s how we rank great games. A lot of app platforms today use downloads as a metric or they use revenue as a metric and we don’t think that’s a good way to say if it’s a good game,” Uhrman said. “You could download a game and never play it again. And with the free-to-try model, revenue isn’t necessarily the best model either. What is [a good metric] is what proves that the game is fun, and that’s engagement. So things like how long you have played a game, how many times you’ve played that game over a certain period of time. How quickly from the time you boot up Ouya, which is an always-on device, do you play that game… It’s those types of engagement metrics that we think prove it’s a fun game.”
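Ouya has not published how O-rank is actually calculated, but the metrics Uhrman lists suggest a weighted engagement score along these lines. The field names and weights in this C sketch are entirely hypothetical and serve only to illustrate ranking by engagement rather than by downloads or revenue.

```c
#include <stdio.h>

/* Hypothetical engagement record for one game; O-rank's real inputs and
 * weights are not public, so everything here is invented for illustration. */
struct engagement {
    const char *title;
    double avg_minutes_per_session;   /* how long people play */
    double sessions_per_week;         /* how often they come back */
    double boot_to_play_share;        /* share of console boots that launch it */
};

/* Toy score that favors games people keep returning to, not just downloading. */
static double o_rank_score(const struct engagement *e)
{
    return 0.4 * e->avg_minutes_per_session
         + 0.4 * (e->sessions_per_week * 10.0)
         + 0.2 * (e->boot_to_play_share * 100.0);
}

int main(void)
{
    struct engagement games[] = {
        { "Downloaded once, never replayed", 25.0, 0.2, 0.01 },
        { "Short sessions, played daily",    12.0, 6.5, 0.30 },
    };
    for (int i = 0; i < 2; i++)
        printf("%-35s score %.1f\n", games[i].title, o_rank_score(&games[i]));
    return 0;
}
```

In this toy example the game that gets picked up again and again outranks the one that was downloaded, played once and abandoned, even though that single session was longer.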
Another interesting area within Discover is Sandbox, which offers developers an opportunity to put builds up and ask people to thumb them up. The idea is for great games to get out of the Sandbox and become searchable and merchandized. It encourages developers to market their games and promote them to fans. Once a game gets out of Sandbox, you know the titles next to it are great quality games, Uhrman explained.
The Make channel is an area that appears to still be in flux. Uhrman said the goal is to serve two audiences, gamers and developers, equally. While Make is a place where a developer can upload early builds, over time it’ll be a place for devs to communicate with fans. “We also can grow it to be, what if you want to make a game, here’s how to market a game, etc. We’ll look to devs and gamers for feedback on how to evolve the section,” Uhrman said.
A console that’s as open as Ouya should have a fairly simple submission process for developers, right? Uhrman confirmed that it’s not overly complicated and should be something most can complete within an hour. “It’s something we thought a lot about given that we’re an open platform… but we wanted to make sure that there are good quality games, at least to the extent that it was optimized to the television and for the controller. So the guidelines isn’t necessarily a quality review, but it checks if there’s malware, does it break or freeze often, does it use our controller schema in the right way, we need to make sure there’s no IP infringement, no pornography, does it elicit real-world violence, you are who you say you are kind of thing – that’s the review. We try to keep it under an hour. Developers can choose to go live immediately or they can choose a certain time,” she detailed.
Curiously, there’s been no partnership reached with the ESRB to rate the games in North America. Right now, the games will be self-rated by devs and community reviewed. Given that Ouya is being sold in mainstream retail, however, we do have to wonder if this will pose potential problems for the company in an atmosphere where some people are still pointing fingers at violent video games. “We’ll take it as it comes; right now we want to expose great content from any type of developer and we do have the thumbs-up/like feature or the report if this is abuse on the system,” responded Uhrman, adding that “We basically say that we can change the rules at any time and we can reject the game for any reason that doesn’t fit our content guidelines – we want everybody on Ouya to have a great experience.”
Ratings aside, one of the big questions surrounding Ouya is whether or not it can truly carve out a market for itself in the console space as industry veterans Sony and Microsoft prepare to launch their respective next-generation systems. The games we saw on Ouya are not graphically intense and are very indie in nature. Can Ouya handle high fidelity triple-A releases? Or does it even need to in order to get noticed?
Ouya does have a partnership with OnLive, so that’s one way to get triple-A games. “That’s one solution. We also support 1080p, hi-def… and we have a USB port so someone can add an external hard drive, so for games that are heavy you could absolutely use that. We have a max download size of 1.2GB for the first download, but as a developer if you want to add and send additional content from your servers you can,” Uhrman said.
“Traditional games take longer to develop, and we have some of those in development that we’re really excited about. Ouya is not about the number of polygons on the screen,” Uhrman acknowledged. “That’s not where we went. We wanted to have innovative and creative exclusive content, and we’re already starting to see that.”
Exclusive content plus a very appealing $99 price point is what could make the system an easy impulse buy for many gamers, Uhrman believes. Moreover, Uhrman noted that most core gamers tend to purchase more than one console, so Ouya is likely to be something they’ll want to buy even if they are getting a PS4.
“Ouya offers something different; every gamer has a different expectation depending upon the platform and we believe we’re going to have innovative, creative games and exclusive games to Ouya… And the barrier to entry at just $99 where every game is free-to-try, I think opens up the opportunity for a number of gamers, even core gamers. Core gamers on average own more than one console. We don’t really think it’s an either/or situation. We’re offering something different – I think they’re going to want Ouya too,” she said.
A number of traditional consoles in the past have launched selling at a loss. Since Ouya is built with off-the-shelf components, it may be easier to contain costs, but Uhrman wouldn’t confirm that each unit is sold at a profit. “We’re really comfortable with our business model,” is all she would say.
That said, if things go the way Uhrman would like, this is only the beginning. Ouya will continue to evolve its software and hardware, and the hardware is likely to get refreshed quickly.
“We’re like any other software platform that iterates and grows over time, and we’ll have a hardware refresh rate more similar to a mobile refresh rate than a console refresh rate because we want to take advantage of the best chips out there and falling commodity prices. We will certainly make sure that there’s enough content that’s optimized for that chip and we don’t push on higher prices to the consumer,” she said.
Does that mean some Ouyas in future will not be compatible with certain games? Uhrman is looking to avoid that scenario. “We have a plan where all content will be compatible with future Ouya systems; we don’t want to fragment our own market for developers, and we always want gamers to have a great experience,” she commented.
Ouya will be interesting to watch. It’s a bold move for the industry and everything we’ve seen so far is completely unconventional. Whether or not that will pay dividends in the long-run is hard to judge at this point in time. “The market is calling us the ‘un-console’ and we like doing things the ‘un-way’,” Uhrman remarked.
Warhammer 40K owner Games Workshop has confirmed a new licensing deal with Roadhouse Interactive to develop new titles for the mobile space based on the franchise. The developer, which is based in Vancouver, describes the new Warhammer title as a side-scrolling action game.
While Roadhouse confirms that the game is in development, the mobile platforms that will see the released version of the game are still up in the air at the moment, but more information is sure to come in the months ahead, according to the studio.
There have been other attempts to capture the Warhammer 40K tabletop war game in videogame form before. These Warhammer offerings have met with mixed reviews, but this new title from Roadhouse will be a first for Warhammer 40K in the mobile space.
Sony has inked a deal with game engine development company Unity to bring a cross-platform engine to its upcoming games console, the Playstation 4 (PS4).
Unity’s game engine is designed to run on a multiplicity of devices and is capable of scaling up to high end systems with powerful graphics cards, while being able to run on tablets, smartphones and games consoles.
Sony’s partnership with the firm extends Unity’s engine to the PS4 and is set to bring a host of tools designed to make it easier for developers to write games for the console.
Unity announced the strategic partnership with Sony Computer Entertainment, which was inked on 15 March, on Thursday. For Unity, the partnership signifies a growing interest in its cross-platform game engine, while for Sony it allows developers to port their existing titles more quickly to the Playstation 4 games console.
Though Unity said that work is “still in early stages”, it is looking to roll out these tools this autumn.
“[I]t’s good to keep in mind that the different tools will have different schedules,” Unity’s CEO David Helgason said in a blog post.
Helgason said Sony is focused on bringing the most creative studios “with an emphasis on independent developers” to its console.
Meanwhile, Sony is revving up for its PS4 launch. Last month, at an event in America, it ‘launched’ the PS4 but only showed off its controller. The PS4 games console itself is now all we are waiting to see.
The Japanese firm teased its fanbase with a picture of something that resembled an egg on Thursday, but no one knows whether this contains the PS4 or a smartphone.
Placing our bet, we are going to say that Sony will open the egg at Easter and show us its PS4 games console. Or perhaps we are just being optimistic.
Video game research firm EEDAR, which already has a proprietary database of over 100 million internally researched data points from more than 90,000 physical, digital, mobile, and social game products, is gearing up for the launch of a new service to assist mobile and social developers. EEDAR said that its new suite of mobile, tablet and social products will aim “to improve sales potential and game quality for titles utilizing in-app monetization.”
EEDAR said that one of the most important things a developer can do is to optimize a game before launch. “EEDAR is able to provide an assessment at any point during the development cycle and accurately project key performance measurements of the final product, in addition to a qualitative assessment that provides feedback from the perspective of a professional game critic and consumers,” the company said about its new product suite.
We caught up with Jesse Divnich, VP of Insights at EEDAR, to get an overview of the key takeaways from the firm’s research on the mobile and social markets. Divnich stressed that developers must be prepared with their in-game monetization strategy for retention and boosting conversion rates before a title is released into an app store.
“When the mobile game market was emerging, developers could optimize key monetization features after a game’s launch. The onboarding acquisition process had a long tail. Today, due to competition and larger consumer awareness, the time to peak engagement is rapidly shortening,” he noted.
“Facebook/Social games are a perfect example. Games like Farmville took nearly a year before they reached their peak users. It gave Zynga ample enough time to adjust game features to increase engagement monetization rates. Now, Social games are peaking within weeks and this idea of always being in ‘beta’ quickly shows its weaknesses when you are onboarding the majority of your lifetime users in only a few weeks,” he continued. “The mobile market is beginning to reach that point. Mobile games are making more headlines, consumers are becoming aware of hit titles faster. Simply put, consumers are engaging mobile games closer to a game’s release date and sleeper hits are becoming less prevalent.”
Even getting highlighted by Apple doesn’t mean what it used to. Developers can squander a great opportunity if they don’t make an effort to optimize. “Being featured by Apple no longer means weeks or months on the top charts. At most you have seven days and if your title is not fully optimized, you will leave money on the table,” Divnich added. “Going forward, developers must ensure they’re launching with maximum optimization, both from an artistic and scientific perspective. This means dedicating more resources to pre-launch analytics and qualitative testing.”
So what are some other notable mistakes developers are making? Well, mimicry certainly isn’t helping. Just because something works in one game doesn’t mean it can be successfully “borrowed” for a different game.
“There are still a large chunk of developers that are still too short-sighted. Clash of Clans has been a top seller for a few months and nearly 50 percent of the concepts and vertical slices that come across my desk in some way or another have an 80 percent overlap of Clash of Clans’ engagement loop. After we perform our assessments, some developers are disappointed to learn their retention, conversion, and monetization rates potential are a fraction of the results Clash of Clans has produced,” observed Divnich.
Even if your game is successful at the start, retention is a real problem, as it’s hard to create a game that has legs. “Competition within the mobile markets is at its fiercest, and every week there are at least seven high-quality releases trying to fight for our attention. The increase in competition, media coverage, and consumer awareness has driven down retention rates, for some genres, to dangerously low levels,” Divnich explained.
The key, he said, is to drive connectivity with a very attractive multiplayer component. “Right now, the tried and true method for improving retention has been multiplayer and social features. The correlation between retention rates and the inclusion of multiplayer and social features is ridiculously high,” Divnich noted. “We do issue caution, however. Just because games with strong multiplayer and social support sell well doesn’t mean slapping on a multiplayer component will automatically make your game a success.”
“We’ve seen this trend occur in the traditional HD gaming space. Call of Duty: Modern Warfare created a multiplayer frenzy and everyone thought by tacking on a multiplayer component their game, too, would be a success. While it helped for some, those that tacked it on were met with lukewarm or disappointing reception. We still encourage our developers to implement new ways of approaching multiplayer and social features, but how they are implemented is key to improving retention rates,” he continued.
While the mobile/tablet space is getting all the attention these days, and social gaming on Facebook has seen sharp declines, that doesn’t mean developers should automatically ignore the social space. There can be opportunities there as well, especially if developers optimize their titles.
“The social platform is still viable and profitable for many developers,” Divnich remarked. “Two years ago developers were fanatic about releasing on the social platform, but they oversaturated the market. There was too much choice in a market, there were no switching barriers for consumers, and there existed too many rip-offs of the standard Farmville or Bejeweled engagement loop. Additionally, Facebook couldn’t keep up with the demand for innovation. Being a platform where consumers violently resist change (e.g. Timeline), it’s difficult to support new tools and back-end features for developers without changing the whole experience altogether.”
“Developers can still be profitable on social platforms, but we certainly approach that space more cautiously,” he concluded.
Richland is set to replace AMD’s Virgo platform, powered by Trinity processors, and this change will happen in June 2013, most likely coinciding with Computex 2013.
AMD has just launched the first batch of Richland mobile APUs, though we have yet to see notebook designs hitting the market. We have written about the mobile Richland APUs previously.
Desktop Richland has been set for a June 2013 launch since late last year, and the fastest of the new parts is the A10 6800K, clocked at 4.1GHz and 4.4GHz with Turbo. It also features Radeon HD 8670D graphics that run at 844 MHz. This is the fastest Richland part and it comes unlocked, ready to replace the current AMD A10 5800K. In Europe the A10 5800K currently sells for around €112, while in the US the same CPU sells for $129.00 (boxed).
The alpha dog A10 6800K is followed by A10 6700, A8 6600K (Unlocked) and A8 6500. AMD has a mix of 100W and 65W quad-core Richland desktop SKUs. There will be a single A6 6400K (Unlocked) SKU and the A4 6300, both dual-cores with 65W TDP.
Production-ready samples were churned out in late January, while volume production is scheduled for late March 2013. The announcement has always been scheduled for June 2013, and Richland will last through most of 2013, until Kaveri with its 28nm Steamroller cores comes on line.
Intel is going to update its desktop Pentium family with several slightly faster Ivy Bridge-based processors.
According to CPU World, the chips should hit the shops in the second quarter of 2013, a quarter after January’s refresh of the budget desktop families and one quarter before the launch of Haswell. The new chips have the original titles of Pentium G2030, G2030T, G2120T and G2140. They will have two cores, but lack Hyper-Threading technology, and can run two threads before getting all confused.
Both the G2000 and G2100 series CPUs support only basic features, like Intel 64 and Virtualization. They do integrate HD graphics clocked at 650 MHz and a dual-channel memory controller, which supports DDR3-1333 on the G2030 and the G2030T, and up to DDR3-1600 on the G2120T and the G2140.
Pentium G2030T and G2120T are low-power models, replacing the G2020T and G2100T but clocked 100 MHz higher, at 2.6GHz and 2.7GHz respectively. However, they still fit into a 35 Watt thermal envelope. The Pentium G2030 and G2140 mainstream microprocessors will be faster than the “T” SKUs, and they will have a 57 per cent higher TDP. Intel expects these to replace the G2020 and G2130 SKUs. The G2030 will run at 3 GHz and the G2140 will operate at 3.3 GHz. No word on prices yet.