IBM Funds Researchers Who Create KiloCore Processor

June 22, 2016 by Michael  
Filed under Computing

Researchers in the Department of Electrical and Computer Engineering at the University of California, Davis, have developed a 1,000-core processor which will eventually be put onto the commercial market.

The team developed the energy-efficient, 621 million transistor “KiloCore” chip, which can manage 1.78 trillion instructions per second, and since the project has IBM’s backing it could end up in the shops soon.

Team leader Bevan Baas, professor of electrical and computer engineering, said that it could be the world’s first 1,000-processor chip and the highest clock-rate processor ever designed at a university.

While other multiple-processor chips have been created, none exceed about 300 processors. Most of those were created for research purposes and few are sold commercially. IBM, using its 32 nm CMOS technology, fabricated the KiloCore chip and could make a production run if required.

Because each processor is independently clocked, it can shut itself down to further save energy when not needed, said graduate student Brent Bohnenstiehl, who developed the principal architecture. Cores operate at an average maximum clock frequency of 1.78 GHz, and they transfer data directly to each other rather than using a pooled memory area that can become a bottleneck for data.

The 1,000 processors can execute 115 billion instructions per second while dissipating only 0.7 Watts, which means it can be powered by a single AA battery. The KiloCore chip executes instructions more than 100 times more efficiently than a modern laptop processor.
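
As a rough sanity check on those figures, here is a minimal back-of-the-envelope sketch using only the numbers quoted above; the AA battery capacity is an assumed typical value (about 2500mAh at 1.5V), not something stated in the article.

```python
# Back-of-the-envelope check of the KiloCore efficiency figures quoted above.
instructions_per_second = 115e9   # 115 billion instructions per second (quoted)
power_watts = 0.7                 # quoted dissipation

energy_per_instruction = power_watts / instructions_per_second
print(f"Energy per instruction: {energy_per_instruction * 1e12:.1f} pJ")  # ~6.1 pJ

# Assumed typical AA cell: ~2500 mAh at 1.5 V (not a figure from the article).
aa_battery_joules = 2.5 * 1.5 * 3600           # amp-hours * volts * seconds per hour
runtime_hours = aa_battery_joules / power_watts / 3600
print(f"Rough runtime on one AA battery: {runtime_hours:.1f} hours")      # ~5.4 hours
```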

The processor is already adapted for wireless coding/decoding, video processing, encryption, and other workloads involving large amounts of parallel data, such as scientific data applications and datacentre work.

Courtesy-Fud

 

Was Last Week’s E3 A Success?

June 20, 2016 by Michael  
Filed under Gaming

E3 2016 has officially come to a close, and despite the fact that Activision and EA were absent from the show floor, my experience of the show was that it was actually quite vibrant and filled with plenty of intricate booth displays and compelling new games to play. The same cannot be said for the ESA’s first ever public satellite event, E3 Live, which took place next door at the LA Live complex. The ESA managed to give away 20,000 tickets in the first 24 hours after announcing the show in late May. But as the saying goes, you get what you pay for…

The fact that it was a free event, however, does not excuse just how poor this show really was. Fans were promised by ESA head Mike Gallagher in the show’s initial announcement “the chance to test-drive exciting new games, interact with some of their favorite developers, and be among the first in the world to enjoy groundbreaking game experiences.”

I spent maybe an hour there, and when I first arrived, I genuinely questioned whether I was in the right place. But to my disbelief, the small area (maybe the size of two tennis courts) was just filled with a few tents, barely any games, and a bunch of merchandise (t-shirts and the like) being marketed to attendees. The fans I spoke with felt like they had been duped. At least they didn’t pay for their tickets…

“When we found out it was the first public event, we thought, ‘Cool, we can finally go to something E3-related,’ because we don’t work for any of the companies and we’re not exhibitors, and I was excited for that. But then we got here and we were like, ‘Uh oh, is this it?’ So we got worried and we’re a little bit upset,” said one attendee, Malcolm, who added that he thought it was going to be in one of the buildings right in the middle of the LA Live complex, rather than a siphoned-off section outside with tents.

As I walked around, it was the same story from attendees. Jose, who came with his son, felt similarly to Malcolm. “It’s not that big. I expected a lot of demos, but they only had the Lego Dimensions demo. I expected something bigger where we could play some of the big, upcoming titles. All it is is some demo area with Lego and some VR stuff,” he told me.

When I asked him if he got what he thought would be an E3 experience, he continued, “Not even close, this is really disappointing. It’s really small and it’s just here. I expected more, at least to play some more. And the VR, I’m not even interested in VR. Me and my son have an Xbox One and we wanted to play Battlefield 1 or Titanfall 2 and we didn’t get that opportunity. I was like c’mon man, I didn’t come here to buy stuff. I came here to enjoy games.”

By cobbling together such a poor experience for gamers, while 50,000 people enjoy the real E3 next door, organizers risk turning off the very audience that they should be welcoming into the show with open arms. As the major publishers told me this week, E3 is in a transitional period and needs to put players first. That’s why EA ultimately hosted its own event, EA Play. “We’re hoping the industry will shift towards players. This is where everything begins and ends for all of us,” said EA global publishing chief Laura Miele.

It seems like a no-brainer to start inviting the public, and that’s what we all thought was happening with E3 Live, but in reality they were invited to an atmosphere and an “experience” – one that barely contained games. The good news, as the quickly sold out E3 Live tickets indicated, is that there is a big demand for a public event. And it shouldn’t be very complicated to pull off. If the ESA sells tickets, rather than giving them away, they can generate a rather healthy revenue stream. Give fans an opportunity to check out the games for a couple days and let the real industry conduct its business on a separate 2-3 days. That way, the ESA will be serving both constituents and E3 will get a healthy boost. And beyond that, real professionals won’t have to worry anymore about getting shoved or trampled, which nearly happened to me when a legion of frenzied gamers literally all started running into West Hall as the show floor opened at 10AM. Many of these people are clearly not qualified and yet E3 allows them to register. It’s time to make E3 more public and more professional. It’s your move ESA.

We asked the ESA to provide comment on the reception to E3 Live but have not received a response. We’ll update this story if we get a statement.

Courtesy-GI.biz

 

Will AMD’s Naples Processor Have 32 Cores?

June 16, 2016 by Michael  
Filed under Computing

AMD’s Zen chip will have as many as 32 cores, 64 threads and more L3 cache than you can poke a stick at.

Codenamed Naples, the chip uses the Zen architecture. Each Zen core has its own dedicated 512KB cache. A cluster [surely that should be a cloister – Ed.] of Zen cores shares an 8MB L3 cache, which brings the total shared L3 cache to 64MB. This is a big chip, and of course there will be a 16-core variant.
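
The cache numbers add up if you assume the cores are grouped four to a cluster, as on other Zen designs; the article does not actually state the cluster size, so treat the sketch below as a rough check rather than a confirmed layout.

```python
# Rough arithmetic check on the Naples cache figures quoted above.
cores = 32
cores_per_cluster = 4            # assumption: four-core clusters (not stated in the article)
l2_per_core_kb = 512             # dedicated cache per core (quoted)
l3_per_cluster_mb = 8            # shared L3 per cluster (quoted)

clusters = cores // cores_per_cluster
total_l2_mb = cores * l2_per_core_kb / 1024
total_l3_mb = clusters * l3_per_cluster_mb

print(f"{clusters} clusters, {total_l2_mb:.0f} MB total L2, {total_l3_mb} MB total L3")
# -> 8 clusters, 16 MB total L2, 64 MB total L3 (matching the quoted 64MB)
```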

This will be a 14nm FinFET product manufactured at GlobalFoundries and supporting the x86 instruction set. Naples has eight independent memory channels and up to 128 lanes of gen 3 PCIe, which makes it suitable for fast NVMe memory controllers and drives. Naples also supports up to 32 SATA or NVMe drives.

If you like a fast network interface, Naples supports 16x10GbE, and the controller is integrated, probably in the chipset. Naples uses the SP3 LGA server socket.

The first Zen-based server and enterprise products will range from a modest 35W TDP to a maximum of 180W TDP for the fastest ones.

There will be dual, quad, sixteen and thirty-two core server versions of Zen, arriving at different times. Most of them will launch in 2017 with a possibility of very late 2016 introduction.

It is another one of those Fudzilla-told-you-so moments. We revealed a few Zen-based products last year. The Zen chip with a Greenland/Vega HBM2-powered GPU with HSA support will come too, but much later.

Lisa Su, AMD’s CEO, told Fudzilla that the desktop version will come first, followed by server, notebook and finally embedded. If that 40 percent IPC improvement holds across more than just a single task, AMD has a chance of giving Intel a run for its money.

 

Courtesy-Fud

 

Is Something Bigger On The Horizon After Virtual Reality?

June 14, 2016 by Michael  
Filed under Gaming

This week’s E3 won’t be entirely dominated by VR, as some events over the past year have been; there’s too much interest in the prospect of new console hardware from all the major players and in the AAA line-up as this generation hits its stride for VR to grab all the headlines. Nonetheless, with both Rift and Vive on the market and PSVR building up to an autumn launch, VR is still likely to be the focus of a huge amount of attention and excitement at and around E3.

Part of that is because everyone is still waiting to see exactly what VR is going to be. We know the broad parameters of what the hardware is and what it can do – the earliest of early adopters even have their hands on it already – but the kind of experiences it will enable, the audiences it will reach and the way it will change the market are still totally unknown. The heightened interest in VR isn’t just because it’s exciting in its own right; it’s because it’s unknown, and because we all want to see the flashes of inspiration that will come to define the space.

One undercurrent to look out for at E3 is one that the most devoted fans of VR will be deeply unhappy with, but one which has been growing in strength and confidence in recent months. There’s a strong view among quite a few people in the industry (both in games and in the broader tech sector) that VR isn’t going to be an important sector in its own right. Rather, its importance will be as a stepping stone to the real holy grail – Augmented or Mixed Reality (AR / MR), a technology that’s a couple of years further down the line but which will, in this vision of the future, finally reach the mainstream consumer audience that VR will never attain.

The two technologies are related but, in practical usage, very different. VR removes the user from the physical world and immerses them entirely in a virtual world, taking over their visual senses entirely with closed, opaque goggles. AR, on the other hand, projects additional visual information onto transparent goggles or glasses; the user still sees the real world around them, but an AR headset adds an extra, virtual layer, ranging from something as simple as a heads-up display (Google’s ill-fated Glass was a somewhat clunky attempt at this) to something as complex as 3D objects that fit seamlessly into your reality, interacting realistically with the real objects in your field of vision. Secretive AR headset firm Magic Leap, which has raised $1.4 billion in funding but remains tight-lipped about its plans, prefers to divide the AR space into Augmented Reality (adding informational labels or heads-up display information to your vision) and Mixed Reality (which adds 3D objects that sit seamlessly alongside real objects in your environment).

The argument I’m hearing increasingly often is that while VR is exciting and interesting, it’s much too limited to ever be a mainstream consumer product – but the technology it has enabled and advanced is going to feed into the much bigger and more important AR revolution, which will change how we all interact with the world. It’s not what those who have committed huge resources to VR necessarily want to hear, but it’s a compelling argument, and one that’s worthy of consideration as we approach another week of VR hype.

The reasoning has two bases. The first is that VR isn’t going to become a mainstream consumer product any time soon, a conclusion based on a number of well-worn arguments that will be familiar to anyone who’s followed the VR resurgence and which have yet to receive a convincing rebuttal – other than an optimistic “wait and see”. The first of those arguments is that VR simply doesn’t work well enough for a large enough proportion of the population for it to become a mainstream technology. Even with great frame-rates and lag-free movement tracking, some aspects of VR simply induce nausea and dizziness in a decent proportion of people. One theory is that it’s down to the fact that VR only emulates stereoscopic depth perception, i.e. the difference in the image perceived by each eye, and can’t emulate focal depth perception, i.e. the physical focusing of your eye on objects at different distances from you; for some people the disparity between those two focusing mechanisms isn’t a problem, while for others, it makes them feel extremely sick.

Another theory is that it’s down to a proportion of the population getting nauseous from physical acceleration and movement not matching up with visual input, rather like getting motion sick in a car or bus. In fact, both of those things probably play a role; either way, the result is that a sizeable minority of people feel ill almost instantly when using VR headsets, and a rather more sizeable number feel dizzy and unwell after playing for extended periods of time. We won’t know just how sizeable the latter minority is until more people actually get a chance to play VR for extended periods; it’s worth bearing in mind once again that the actual VR experiences most people have had to date have been extremely short demos, on the order of 3 to 5 minutes long.

The second issue is simply a social one. VR is intrinsically designed around blocking out the world around you, and that limits the contexts in which it can be used. Being absorbed in a videogame while still aware of the world and the people around you is one thing; actually blocking out that world and those people is a fairly big step. In some contexts it simply won’t work at all; for others, we’re just going to have to wait and see how many consumers are actually willing to take that step on a regular basis, and your take on whether it’ll become a widespread, mainstream behaviour or not really is down to your optimism about the technology.

With AR, though, both of these problems are solved to some extent. You’re still viewing the real world, just with extra information in it, which ought to make the system far more usable even for those who experience motion sickness or nausea from VR (though I do wonder what happens regarding focal distance when some objects appear to be at a certain position in your visual field, yet exist at an entirely different focal distance from your eyes; perhaps that’s part of what Magic Leap’s secretive technology solves). Moreover, you’re not removed from the world any more than you would be when using a smartphone – you can still see and interact with the people and objects around you, while also interacting with virtual information. It may look a little bit odd in some situations, since you’ll be interacting with and looking at objects that don’t exist for other people, but that’s a far easier awkwardness to overcome than actually blocking off the entire physical world.

What’s perhaps more important than this, though, is what AR enables. VR lets us move into virtual worlds, sure; but AR will allow us to overlay vast amounts of data and virtual objects onto the real world, the world that actually matters and in which we actually live. One can think of AR as finally allowing the huge amounts of data we work with each day to break free of the confines of the screens in which they are presently trapped; both adding virtual objects to our environments, and tagging physical objects with virtual data, is a logical and perhaps inevitable evolution of the way we now work with data and communications.

While the first AR headsets will undoubtedly be a bit clunky (the narrow field of view of Microsoft’s Hololens effort being a rather off-putting example), the evolutionary path towards smaller, sleeker and more functional headsets is clear – and once they pass a tipping point of functionality, the question of “VR or AR” will be moot. VR is, at best, a technology that you dip into for entertainment for an hour here and there; AR, at its full potential, is something as transformative as PCs or smartphones, fundamentally changing how pretty much everyone interacts with technology and information on a constant, hourly, daily basis.

Of course, it’s not a zero-sum game – far from it. The success of AR will probably be very good for VR in the long term; but if we see VR now as a stepping stone to the greater goal of AR, then we can imagine a future for VR itself only as a niche within AR. AR stands to replace and reimagine much of the technology we use today; VR will be one thing that AR hardware is capable of, perhaps, but one that appeals only to a select audience within the broad, almost universal adoption of AR-like technologies.

This is the vision of the future that’s being articulated more and more often by those who work most closely with these technologies – and while it won’t (and shouldn’t) dampen enthusiasm for VR in the short term, it’s worth bearing in mind that VR isn’t the end-point of technological evolution. It may, in fact, just be the starting point for something much bigger and more revolutionary – something that will impact the games and tech industries in a way even more profound than the introduction of smartphones.

Courtesy-GI.biz

 

Is AMD Outpacing nVidia In The Gaming Space?

June 14, 2016 by Michael  
Filed under Gaming

MKM analyst Ian Ing claims that AMD’s recent gaming refresh was better done than Nvidia’s.

Writing in a research report, Ing said that both GPU suppliers continue to benefit from strong core gaming plus emerging applications for new GPU processing.

However, AMD’s transition to the RX series from the R9 this month is proving smoother than Nvidia’s switch to Pascal architecture from Maxwell.

Nvidia is doing well from new GPU applications such as virtual reality and autonomous driving.

He said that pricing was holding despite a steady availability of SKUs from board manufacturers. Ing wrote that he expected a steeper ramp of RX availability compared to last year’s R9 launch, as the new architecture is lower-risk, given that HBM memory was implemented last year.

Ing upped his price target on Advanced Micro Devices stock to $5 from $4, and on Nvidia stock to $52 from $43. On the stock market today, AMD stock rose 0.9 per cent to $4.51. Nvidia climbed 0.2 per cent to $46.33.

Nvidia unveiled its new GeForce GTX 1080, using the Pascal architecture, on 27 May, and while Maxwell inventory was running out, Nvidia customers were experiencing Pascal shortages.

“We would grow concerned if the present availability pattern persists in the coming weeks, which would imply supply issues/shortages,” Ing said.

Courtesy-Fud

 

Is E3 Still Relevant?

June 9, 2016 by Michael  
Filed under Gaming

What is the point of E3? I ask not in a snarky tone, but one of genuine curiosity, tinged with concern. I’m simply not sure what exactly the show’s organizers, the ESA, think E3 is for any more. Over the years, what was once by far the largest date in the industry’s annual calendar has struck out in various new directions as it sought to remain relevant, but it’s always ended up falling back to the path of least resistance – the familiar halls of the Los Angeles Convention Center, the habitual routine of allowing only those who can prove some industry affiliation to attend. For all that the show’s organizers regularly tout minor tweaks to the formula as earth-shattering innovation, E3 today is pretty much exactly the same beast as it was when I first attended 15 years ago – and by that point, the show’s format was already well-established.

There’s one major difference, though; E3 today is smaller. It now struggles to fill the convention center’s halls, and a while back ditched the Kentia Hall – which for years promised the discovery of unknown gems to anyone willing to sift through its morass of terrible ideas. Kentia refugees now fill gaps in the cavernous South Hall’s floor plan, elevated to sit alongside a roster of the industry’s greats that gets more meagre with each passing year. This year, attendees at E3 will find it hard not to notice a number of key absences. The loss of Konami’s once huge booth was inevitable given the company’s U-turn away from console publishing, but the decisions of EA and Activision to pull out of the show this year will be felt far more keenly.

Hence the question: what’s the point? Who, or what, is E3 actually meant to be for? It’s not for consumers, of course – they’re not allowed in, in theory, though the ESA has come up with various pointlessly convoluted ways of letting a handful of them in anyway. It’s for business, yet big players in the industry seem deeply dissatisfied with it. It’s not just EA and Activision, either; even the companies who are actually exhibiting on the show floor seem to have taken to viewing it as an addendum to the actually important part of the week, namely their live-broadcast press conferences. Once the realm only of platform holders, now every major publisher has their own – and if EA and Activision’s decision to go their own way entirely, leaving the E3 show floor, has no major negative consequences for them this year, you can be damned sure others will question the show’s value for money next year.

The problem is that the world has changed and E3 has not. Once, it was the only truly global event on the calendar; back then, London had ECTS and Tokyo had TGS, but there was no question of them truly challenging E3’s dominance. The world was a very different place back then, though. It was a time before streaming high-resolution video, a time before the Internet both made the world a much smaller place and made the hyper-local all the more relevant. Today, E3 sits in a landscape of events most of which, bluntly, justify their existence far better than the ESA’s effort does. Huge local events in major markets around the world serve their audiences better than a remote event in LA; GamesCom in Germany and TGS in Tokyo remain the biggest of those, but there are also major events in other European, Asian and Latin American countries that balance serving the business community in their regions with putting on a huge show for local consumers.

In the United States, meanwhile, E3 finds itself assailed on two sides. The PAX events have become the region’s go-to consumer shows, and a flotilla of smaller shows cater well to specific business and social niches. GDC, meanwhile, has become the de facto place to do business and for the industry to engage in conversation and debate with itself. The margin in between those two for a “showcase show that’s not actually for consumers but sort-of lets some in and is a place for the industry to do business but also please spend a fortune on a gigantic impressive stand” is an increasingly narrow piece of ground to stand on, and E3 is quite distinctly losing its balance.

A big part of the reason for that is simply that E3 has an identity crisis. It wants to be a global show in the age of the local, in an age where “global” is accomplished by pointing a camera at a stage, not by flying people from around the world to sit in the audience. It wants to be a spectacle, and a place to do business, and ends up being dissatisfying on both counts; it wants to excite and intrigue consumers, but it doesn’t want to let them in. The half-measures attempted over the years to square these circles have done nothing to convince anyone that E3 knows how to stay relevant; slackening ties to allow more consumers into the show simply annoys people who are there for work, and annoys the huge audience of consumers who remain excluded. The proposed consumer showcase satellite event, too, will simply annoy companies who have to divide their attention, and annoy consumers who still feel like they’re not being let into the “real thing”. Meanwhile the show itself feels more and more like the hole in the middle of a doughnut – all these huge conferences, showcases and events are arranged around E3’s dates, but people spend less and less time at the show proper, and with EA and Activision go two of the major reasons to do so. (It’s also hard not to note, though I can’t quantify it in figures, that more industry people each year seem to stay home and watch the conferences online rather than travelling to LA.)

The answer to E3’s problems has to be an update to its objectives; it has to be for the ESA to sit down with its membership (including those who have already effectively abandoned the show) and figure out what the point of the show is, and what it’s meant to accomplish. The E3 brand has enormous cachet and appeal among consumers; it’s hard to believe that there’s no demand for a massive showcase event at the LA Convention Center that actually threw its doors open to consumers; it’s simply a question of whether ESA members think that’s something they’d like to participate in. From a business perspective, I think they’d be mad not to; the week of E3, loaded with conferences and announcements, drives the industry’s most devoted fans wild, and getting a few hundred thousand of them to pass through a show floor on that week would be one of the most powerful drivers of early sentiment and word of mouth imaginable.

As for business; it’s not like there isn’t a tried, tested and long-standing model for combining business and consumer shows that doesn’t involve a half-baked compromise. Tons of shows around the world, in games and in other fields, open for a couple of trade days before throwing the doors open to consumers over the weekend. Other approaches may also be valid, but the point is that there’s a simple and much more satisfying answer than the daft, weak-kneed reforms the ESA has attempted (“let’s let exhibitors give show passes to a certain number of super-fan consumers” – really? Really?).

E3 week remains a big deal; E3 itself may be faltering and a bit unloved, but the week around it is pretty much the only truly global showcase the industry has created for itself. That week deserves to be served by a better core event, rather than inexorably moving towards being a ton of media events orbiting a show nobody can really be bothered with. The organizers at the ESA need to be brave, bold and aggressive with what they do with E3 in future – because just falling back on the comfortable old format is starting to show diminishing returns at an alarming rate.

Courtesy-GI.biz

 

Is Apple Going With AMD’s Polaris GPU?

June 9, 2016 by Michael  
Filed under Computing

AMD has two GPUs which should make it into future Mac notebooks and desktops, both based on the Polaris 10 and 11 designs. Our original story yesterday smoked out a few more details from our industry sources.

The first chip, which will probably head to Mac all-in-ones or notebooks, is a sub-40W TDP, MXM-based GPU with 10 compute units, each containing 64 compute cores, for a total of 640 cores, and the card has 4GB of RAM. A sub-40W GPU will give users about 1-1.25 teraflops, which should be more than enough to power Apple displays. It might even have enough juice to play a casual game.
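
To see where the “about 1-1.25 teraflops” figure comes from: GCN parts perform two floating-point operations per stream processor per clock (a fused multiply-add), so the numbers work out for a sub-40W part clocked at roughly 0.8-1.0GHz. The clock range in the sketch below is an assumption on my part, not something quoted in the story.

```python
# Rough check of the "about 1-1.25 teraflops" claim for the sub-40W part.
stream_processors = 10 * 64      # 10 compute units x 64 cores each = 640 (quoted)
flops_per_clock = 2              # a fused multiply-add counts as two FLOPs per lane per cycle

for clock_ghz in (0.8, 1.0):     # assumed clock range for a sub-40W mobile part
    tflops = stream_processors * flops_per_clock * clock_ghz * 1e9 / 1e12
    print(f"At {clock_ghz:.1f} GHz: {tflops:.2f} TFLOPS")
# -> roughly 1.02 to 1.28 TFLOPS, in line with the article's 1-1.25 figure
```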

The memory works at 8Gbps and the card has up to five DisplayPorts or other video connectors. With 14nm FinFET manufacturing, AMD/RTG can squeeze much more performance into the same power envelope compared with 28nm GCN cards. The R2x0 models are Apple’s current flavor; Radeon R2x0 parts have been around since late 2013, and now is a good time to replace them with the new 14nm GCN 4.0 architecture.

There will be a Polaris 10-based sub-150W TDP part that will be enough to replace the AMD Radeon R9 M395X with 4GB of GDDR5 memory that currently sits in the 5K iMac. Such a card can have up to 8GB of memory and can easily drive the highest possible resolutions, including Apple’s famous Retina 5K display found on the 27-inch iMac. If you like numbers, that is a 5120×2880 resolution with shedloads of pixels.
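
To put “shedloads of pixels” into numbers, here is a tiny sketch comparing the Retina 5K panel with standard 4K UHD; the 4K figure is included only for comparison and is not from the article.

```python
# Quantifying "shedloads of pixels" for the Retina 5K panel mentioned above.
retina_5k = 5120 * 2880          # quoted resolution
uhd_4k = 3840 * 2160             # standard 4K UHD, for comparison only

print(f"Retina 5K: {retina_5k / 1e6:.1f} million pixels")   # ~14.7 million
print(f"4K UHD:    {uhd_4k / 1e6:.1f} million pixels")      # ~8.3 million
print(f"Ratio:     {retina_5k / uhd_4k:.2f}x")              # ~1.78x
```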

It remains to be seen when Apple plans to release these cards. Nvidia did not get any of its products into the 2016 Apple refresh product cycle, and that cycle should begin soon.

Courtesy-Fud

 

AMD Goes After Intel’s Skylake With Bristol Ridge

June 2, 2016 by Michael  
Filed under Computing

AMD has revealed the firm’s seventh-generation system-on-a-chip accelerated processing units (APUs).

Bristol Ridge and Stoney Ridge sound a little like locations in a Somerset version of Game of Thrones, but they both feature AMD’s Excavator x86 processor cores and Radeon R7 graphics, which AMD sees powering e-sports gaming on laptops.

Bristol Ridge is the more powerful of the two, coming in 35W and 15W versions of AMD FX, A12 and A10 processors and offering up to 3.7GHz of processing power. The former two processors are paired with up to eight Graphics Core Next (GCN) cores in the R7 to provide a decent pool of graphics processing power.

Stoney Ridge offers less in the way of processor power, topping out at 3.5GHz, and versions include 15W A9, A6 and E2 processor configurations coupled with lower-powered graphics accelerators.

AMD claimed that the new APUs offer a 50 per cent hike in performance over the previous generation Carrizo APUs. However, this rise is over APUs from the early part of Carrizo’s lifecycle, so performance gains over the most recent Carrizo APUs are likely to be 10 to 20 per cent.

AMD also said that its silicon is faster than rival chips from Intel, including the i3-6100U found in several ultraportable laptops.

Many of these tests are subjective and depend on how a hardware manufacturer configures and sets up the APUs in a laptop or tablet, but AMD does have its graphics tech to draw on, such as the GCN architecture, which could give it the edge over Intel’s chips when it comes to pushing pixels.

The APUs will be aimed primarily at slim laptops that need low-power consumption chips, much like Intel’s Skylake line.

Bristol Ridge is currently available to end users only in the form of HP’s latest Envy laptop. But now that AMD has debuted the full range of the seventh-generation APUs we can expect to see them in other ultraportable machines before too long.

Courtesy-TheInq

 

Are Developers Responsible For A Game’s Success?

May 23, 2016 by Michael  
Filed under Gaming

Orcs Must Die! Studio Robot Entertainment is a rare breed nowadays – in an age where you’re either indie or AAA, the Plano, Texas-based company (one of several Texas developers that rose from the ashes of Age of Empires studio Ensemble) has managed to succeed as a mid-sized outfit. When Robot was formed in 2009, the company operated on a small scale, but things really changed when it landed a major investment from Chinese media giant Tencent in 2014. That enabled Robot to scale up and to benefit from Tencent’s knowledge at the same time.

“We made the first Orcs Must Die! as a semi-indie studio. We were about 40-45 people. We’re about twice that size now. And we were able to do Orcs Must Die! and Orcs Must Die! 2 with that. We kind of kept following the franchise and following what the fans were asking for in that game and we knew the next version was going to be bigger. We had to make a strategic decision – were we going to stay small and try to do another small version of that game or did we want to be ambitious and try to do something a little bit bigger? And that was going to necessitate a different type of arrangement for us to find financing. Because, you know, just selling a $15 or $20 game on Steam over and over is tough to support a studio to make a bigger game,” Robot CEO Patrick Hudson told GamesIndustry.biz.

“We also did some licensing deals for this game. As an online game, we didn’t necessarily have an ambition of setting up a European publishing office or an Asian publishing office. So we went to Europe and we partnered up with GameForge and licensed the rights for them to publish the game for us. And that comes with some advances and license fees, which help us make the game. We did the same thing with Tencent in China and that led to an investment. So we are in that mid-space. I think you’re right that there are fewer people in that space right now. It would probably be harder for us to stay in that space if we didn’t have really strong partnerships with folks like GameForge and Tencent.”

Investments and partnerships can clearly make a difference to any game company, but it’s also easy to mismanage a studio’s growth. Before you know it, one department doesn’t know what the other is doing, and things spiral out of control.

“It’s all in how you manage it. You’re either afraid of that growth or you embrace it, put a process and structure in place to allow for that. There’s no question we have to run our studio differently at 90 people than we did at 45. There’s more structure in place, there are more layers of leadership to help the project along. We’ve done a decent job of managing the growth… We went through the same kind of growth curve at Ensemble and we actually spent a lot of time talking about what went well, what didn’t go well, ‘What did we learn from that experience that we could have managed the growth better, how do we apply that to Robot?’ So we try to be a little bit smarter about that. Talking to other friendly studios [helps also] – ‘Hey, what did you guys do through this kind of growth? What pains did you experience? What did you learn?’ So we’ll grow as much as it takes to support Orcs Must Die! or as little to support it,” Hudson continued.

While everyone was devastated when Microsoft seemingly shut down a successful Ensemble Studios for no good reason, Hudson takes it as a learning experience.

In Ensemble’s case, Hudson discovered that scale ultimately held back some of its better talent. “Age of Empires attracted a lot of really good game talent to the studio, either people who were starting fresh in the games industry and learned how to make great games inside of Ensemble or we recruited really talented people to Dallas to work on the Empires franchise and, ultimately, Halo Wars. So we had just a tremendous amount of pent up talent in what was not a huge studio. At its peak it was 120 people. So it was very densely populated with talent. When you’re a studio that size, you have a lead structure within each department, but not everybody gets a chance to take those leadership positions and do their own games. Once Ensemble went away, you saw all these talented people go off in different places and show what they were capable of,” he remarked.

Working at Ensemble instilled a certain level of dedication to quality in all the developers who worked there too. “We held ourselves to a really high standard of making games that everyone took with them to their next places. I would say, in addition to that… all of us worked for another six years for Microsoft post-acquisition, so we got to learn the industry as both indie developers and inside a publisher. We got to learn the entire space, how the whole ecosystem is close to the publishing side. So that was a very valuable experience that maybe a lot of other devs don’t get,” Hudson said.

There’s no animosity or regret about Ensemble either, as far as Hudson is concerned: “Six years is a long time to be with a company post-acquisition. It was actually, for the most part, six good years. Microsoft treated us well. I think we worked well with the people we worked with at Microsoft. You do see some [studios] that get acquired and they’re gone within a year or two. We didn’t have that experience. I kind of view six years as a nice success.”

Perhaps the greatest lesson that Hudson and Robot have learned, even before the rise of Kickstarter and Steam Early Access, is that listening and responding to a vibrant community is critical. Discoverability has become a nuisance to deal with, and you need the fans behind you in order to succeed. If you have expectations that a platform holder will feature you, your marketing strategy needs an overhaul.

“As some of those previous PC developers that came into mobile are now migrating back to PC, discoverability on PC has become not quite as bad as mobile, but it’s not easy. There’s a lot of content on Steam now. There’s no easy space. Games is more competitive and a harder business than it’s probably ever been. There’s just a lot of great developers out there making a lot of great content and there’s just no barriers to putting your content out there to players, and players move quickly from game to game. They’re going to seek the best content,” Hudson noted.

He continued, “When I talk to the Valve or Apple or Google folks, they know the problem. They see it. But it’s an almost impossible problem to solve… Everyone wants to be featured, right? It’s funny, when you talk to a new mobile developer and be like, ‘Hey, we’re gonna make this great game. We’re gonna be featured.’ Probably not. You’re probably not going to be featured. Unless you’re doing something really cool and innovative and very different that really shows off the platform.

“They all have different programs to try and help you get noticed but you can’t make that the core of your strategy. It’s really up to you to make a great game. If you don’t have a marketing budget to cultivate a community, start with a small community, really cultivate it and listen to them and speak to them and let them organically grow. It’s not the platform holder’s job to make it successful.”

Beyond building a robust community, selecting the right business model for your game is crucial. While free-to-play is almost the default option in today’s market, Hudson said that premium games are coming back too.

“We really do think of it as a case-by-case. There are interesting trends in the market where you’re seeing paid games come back in certain areas – even in China where we’re seeing an uptick in paid games, customers in China buying paid games. [That's] never happened before. So it’s really going to depend on the game, the needs of the game,” he commented.

For Orcs Must Die! Unchained, which just entered an open beta about a month ago, free-to-play just made sense for Robot, as it’s a big multiplayer MOBA-style tower defense game; Robot wants as many people online for matchmaking as possible. Hudson and Robot have tried free-to-play before with Hero Academy in 2012, but he fully admitted, “We made a ton of mistakes, we didn’t really know what we were doing. It was a very successful game critically. It probably should’ve been a little more successful for us commercially, but we learned those lessons and hopefully we’re applying some of those.

 

“[Unchained] will be our first big free-to-play PC title. And we get a lot out of our partners too. GameForge has been operating free-to-play titles forever. Tencent has been operating free-to-play titles forever and we really lean on their expertise and we ask them to be involved with us as we design the game. The nice thing about both of those partners is… monetization follows. They start with making a great game, get the players around, keep the players around, [and then] hopefully they’ll pay you down the road. But don’t solve for money up front. So we’ll see. This will be our first foray into it. We’ll make a few more mistakes I’m sure but hopefully we learn quickly.”

Right now Robot remains 100 percent committed to Orcs Must Die! and the studio is bringing the game to PS4 later this year, but that doesn’t mean it expects to be pigeonholed with that one franchise. Hudson said that Robot continues to brainstorm new IP ideas, but nothing has made it far enough along in development to warrant a release. “We’ll definitely do a new IP again. We started a couple of prototypes in the past few years that haven’t panned out. It happens all the time, right?” he said, adding that the company also remains interested in mobile but is “very cautious.”

“I think what’s interesting about mobile over the last couple of years is how non-dynamic the market is as far as the top games. The games that have lived in the top charts have been there now for 2 or 3 years. They get there and they stay there and they’re really good at staying there and it’s hard to break in and become the new thing. There are some good case studies for that. Certainly not nearly as many as there are on PC,” he said.

Hudson on VR

Likewise, virtual reality, although enticing, is just too risky for a studio like Robot, Hudson noted.

“It comes back to a company our size and where we sit. For us to overinvest in a market where it’s hard to know what the growth curve is going to be would be pretty risky at our size. We can’t afford to be wrong on something this new and this different… We love the options it provides for new and compelling experiences in games. We’ve brainstormed plenty of ideas for Orcs Must Die! in VR and we’ve got some pretty good ones, but it’ll be a while before we seriously invest in it,” he said.

Hudson joked that Robot is “living vicariously” through a couple of ex-Ensemble studios in Dallas that are working on VR now.

A conservative and cautious approach is probably one of the reasons Robot has managed to survive in an increasingly challenging environment. Even for eSports – an area of the industry that Orcs Must Die! clearly could excel in – Hudson isn’t jumping in headfirst.

That being said, Hudson is definitely optimistic about eSports as a sector. “I think it’s going to become an increasingly large aspect of the industry. And there will be the games that work and the games that don’t work for it. There will be a lot of companies chasing it and probably crash on the rocks trying to get there, but it’s going to continue to grow. I think you’ll see it across platforms too. I think you’ll continue to see eSports be popular in mobile. It’ll continue to grow there. You think of it as a PC thing now but it’s not. I think it’s going to encompass all aspects of games,” he said.

 

Courtesy-GI.biz

 

Is AMD’s Project Polaris Plan Starting To Make Sense?

May 19, 2016 by Michael  
Filed under Computing

AMD’s Polaris strategy is becoming a bit clearer, and even if we thought that the fabless chipmaker might have dropped the ball a bit, its cunning plan is starting to make sense.

Last week we saw Nvidia showing off its next-generation flagship GPUs, the GTX 1080 and the GTX 1070. The Green Goblin told us shedloads of things which, if true, would clean AMD’s clock in terms of performance.

It threw into question AMD’s decision to focus on the mainstream desktop and notebook markets with its upcoming GCN (Graphics Core Next) 4.0 GPUs, codenamed Polaris 10 and 11.

Normally GPU manufacturers release the flagship or ‘high-end’ products first to get all the attention and then release the mid-range chips for the great unwashed a lot later once they have sorted out yields.

But AMD’s cunning plan suggests that it is going to do the opposite. It is risky, but it could mean that the outfit could make more money quickly. This is because mainstream GPUs account for the majority of GPU sales.

Sure, the high-end, flagship-level graphics cards carry the largest profit margins, but mainstream and performance-segment GPUs account for the vast majority of total graphics card sales. Chasing the high end alone is not going to sort out AMD’s market share and profit woes.
AMD’s discrete GPU sales increased by 6.69 per cent in Q4 of 2015, which coincides with its release of the performance-segment R9 380X graphics card. Meanwhile Nvidia’s desktop discrete GPU shipments were down by 7.56 per cent from when it released its mainstream GTX 950.

Sure, this is small potatoes, but it means that AMD could take roughly 7 per cent of Nvidia’s sales in a single quarter by releasing a graphics card in a price segment where Nvidia had nothing on offer.

Now Nvidia is going to be focusing on the high end first and will not release anything for the performance and mid-range segments for ages. But AMD will have its Polaris parts there and ready. In fact, it will be about six months ahead of Nvidia, which is more than enough time to drain a bit of the Green Goblin’s market share.

Then when AMD releases its flagship graphics card based on the HBM2 powered Vega 10 GPU, possibly as early as October 2016, it will arrive with a spec which is better than the GTX 1080 and is meant to go toe-to-toe with a possible GTX 1080 Ti or Titan X successor.

The plan requires nerves of steel, particularly as AMD’s bottom line is absolute pants at the moment, but it does make sense. However, it is not good news for consumers. AMD is deliberately avoiding competition with this plan, which means it can afford to charge a bit more until Nvidia pulls its finger out. Good for AMD, but it means that prices will be higher because AMD does not have to undercut Nvidia.

Courtesy-Fud

 

Could VR Be Used In The Future To Cure Paranoia?

May 10, 2016 by Michael  
Filed under Computing

Researchers at Oxford University think that virtual reality could soon be being used to treat psychological disorders such as paranoia.

In the British Journal of Psychiatry, which we get for the horoscope, the researchers explained how they placed paranoid people into virtual social situations. Through interacting with the VR experience, subjects were able to safely experience situations that might otherwise have made them anxious. We would have thought that paranoid people would not even have put on the glasses, but apparently they did.

By the end of the day more than half of the 30 participants no longer suffered from severe paranoia. This positive impact carried through into real world situations, such as visiting a local shop.

Paranoia causes acute anxiety in social situations – after all, sufferers believe that everyone is out to get them. About two percent of the population suffer from paranoia, which is sometimes connected to schizophrenia.

Treatment methods for anxiety often involve slowly introducing the source of anxiety in a way that allows the patient to learn that the event is safe rather than dangerous. The VR experiment used a train ride and a lift scene to teach subjects to relearn that they were really safe.

The VR simulation did not use very photo-realistic graphics, which raises the question of whether realism is important for a positive impact.

Courtesy-Fud

 

Sony Patents Eyeball Camera

May 4, 2016 by Michael  
Filed under Around The Net

Recently, Sony Computer Entertainment filed a patent with the USPTO to integrate a camera into a wearer’s contact lens, complete with the imaging sensor as well as data storage and a wireless communication module. The technology, powered wirelessly and controlled by blinking, also offers the possibility of auto-focus, zooming and image stabilization.

Sony is the second to file a patent for integrating a wearable camera into a contact lens, after it was discovered that Samsung filed a patent in South Korea for a similar concept on April 5th. Sony’s patent is filed under the name “Contact Lens and Storage Medium” and is slated to become a full-fledged camera device, complete with a lens, main CPU, imaging sensor, storage area, and a wireless communication module. The camera unit also includes support for autofocus, zooming, and image stabilization.

This isn’t the first time we’ve seen wireless sensor technology integrated into a contact lens. In January 2014, Google announced its ambitions to create a glucose-level monitoring contact lens for the diagnosis and monitoring of blood sugar levels for diabetic patients. Google’s project integrates several miniscule sensors loaded with tens of thousands of transistors that measure glucose levels from a wearer’s tear drops, along with a low-power wireless transmitter to send results to other wearable devices along with smartphones and PCs.

More recently on April 7, it was discovered that Samsung could be working on mass-marketing a CMOS imaging sensor into a contact lens thanks to a new patent discovered by SamMobile and GalaxyClub.nl. The patent application, filed in South Korea, includes a display that projects images directly into a wearer’s field of view and includes a camera, an antenna, and several sensors for detecting movement and eye blinks.

Sony’s contact lens patent could be a successor to its HMZ 3D displays

Rather than being positioned solely as a healthcare solution, Sony’s patent appears to be a more biologically integrated implementation of the company’s early head-mounted displays (HMDs) with wireless video streaming. The big difference this time, however, will be the inclusion of a camera lens and a near-undetectable appearance, depending on how well Sony manages to camouflage any chips and modules in its first-generation contact lens units.

In November  2011, Sony introduced its first-generation HMZ-T1 head mounted 3D display, complete with dual 1280x720p OLED displays, support for 5.1 channel surround via earbuds and signal input from an HDMI 1.4a cable. This model weighed 420g / 0.93lbs with a launch price of $799.

In October 2012, Sony introduced the second-generation HMZ-T2 follow-up in Japan. This model reduced weight by nearly 20 percent (330g / 0.73lbs) and replaced the earbuds with a dedicated 3.5mm headphone jack, complete with near-latency-free wireless HD viewing (dual 1280x720p displays), 24p cinema picture support, and signal input via an HDMI 1.4a cable.
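
A quick check on the quoted weight reduction, using only the figures above:

```python
# Checking the weight reduction between the HMZ-T1 and HMZ-T2 quoted above.
hmz_t1_g = 420   # first-generation weight (quoted)
hmz_t2_g = 330   # second-generation weight (quoted)

reduction_pct = (hmz_t1_g - hmz_t2_g) / hmz_t1_g * 100
print(f"Weight reduction: {reduction_pct:.1f}%")   # ~21.4%, in the ballpark of the "nearly 20 percent" quoted
```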

In November 2013, Sony introduced the HMZ-T3W, the third-generation of its head mounted 3D viewer with near-latency free, wireless HD viewing (dual 1280x720p displays) with a 32-bit DAC delivering 7.1 channel audio (5Hz – 24KHz), and signal input via MHL cable and HDMI 1.4a. This device was not available in the United States and launched in Europe for a stunning £1,300 ($2,035) and is alternatively available as an import from Japan for $1090.

Will not come cheap

Based on the initial launch prices of Sony’s previous HMZ headsets ($799 and above) and the Google Glass launch price of $1,499, and depending on the company’s target market, we might expect Sony’s first-generation contact lenses to land somewhere between these two price points when they begin mass production within the next couple of years.

Courtesy-Fud

 

Does Acer Support Virtual Reality?

April 28, 2016 by Michael  
Filed under Computing

Acer’s boss Jason Chen says his company will not make its own VR devices and will focus on getting its gaming products to work with the existing VR platforms.

Eyebrows were raised when Acer released its new Predator series products, which support virtual reality devices. The thought was that Acer might have a device of its own in the works. However, Acer CEO Jason Chen said there were no such plans and that the goal was to get everything working with the four current major VR platforms: Oculus, HTC’s Vive, OSVR and StarVR.

He said that VR was still at a rather early stage and so far has not had any killer apps or software – although that never stopped the development of the tablet, which to this day has not found itself a killer app. But Chen said that VR’s demand for high-performance hardware will be a good opportunity for Acer.

Acer is planning to add support for VR devices into all of its future Predator series products and some of its high-end PC products.

Chen told Digitimes that Acer was investing in two robot projects in the US, the home-care Jibo and the robot arm Kubi, and that the company has also been developing robot technologies internally and should achieve some results within two years. Acer’s robot products will target mainly the enterprise market.

Courtesy-Fud

 

Will The VR Industry Have Blockbuster Sales This Year?

April 27, 2016 by Michael  
Filed under Around The Net

Virtual reality is, without a doubt, the most exciting thing that’s going to happen to videogames in 2016 – but it’s becoming increasingly clear, in the cold light of day, that it’s only going to be providing thrills to a relatively limited number of consumers. Market research firm Superdata has downgraded its forecast for the size of the VR market this year once more, taking it from a dizzying $5.1 billion projection at the start of the year to a more reasonable sounding $2.9 billion; though I’d argue that even this figure is optimistic, assuming as it does supply-constrained purchases of 7.2 million VR headsets by American consumers alone in 2016.

Yes, supply-constrained; Superdata reckons that some 13 million Americans will want a VR headset this year, but only 7.2 million will ship, of which half will be Samsung’s Gear VR – which is an interesting gadget in some regards, but I can’t help but feel that its toy-like nature and the low-powered hardware which drives it isn’t quite what most proponents of VR have in mind for their revolution. Perhaps the limited selection of content consumers can access on Gear VR will whet their appetite for the real thing; pessimistically, though, there’s also every chance that it will queer the pitch entirely, with 3.5 million low-powered VR gadgets being a pretty likely source of negative word of mouth regarding nausea or headaches, for example.

This is a problem VR needs to tackle; for a great many consumers, without proactive moves from the industry, word of mouth is all they’re going to get regarding VR. It’s a transformative technology, when the experience is good – as it generally is on PSVR, Rift and Vive – but it’s not one you can explain easily in a video, or on a billboard, because the whole point is that it’s a new way of seeing 3D worlds that isn’t possible on existing screens. Worse, when you see someone else using a VR headset in a video or in real life, it just looks weird and a bit silly. The technology only starts to shine for most consumers when they either experience it, or speak to a friend evangelising it on the basis of their own experience; either way, it all comes down to experience.

That’s why it was interesting to hear GameStop talk up its role as a place where consumers can come and try out PlayStation VR headsets this year. That’s precisely what the technology needs; at the moment, there are only a handful of places you can go to try out VR, and it’s utterly insufficient. VR’s objective for 2016 isn’t just to get into the hands of a few million consumers – it’s to become desired, deeply desired, by tens of millions more. The only way that will happen is to create that army of evangelists by creating a large number of easily accessible opportunities to experience VR – and GameStop is right to position itself as the industry’s best chance of doing so in the USA. Pop-up VR booths in trendy spots might excite bloggers, but what this new sector needs in the latter half of 2016 is much more down to earth – it needs as many of America’s malls as possible to be places where shoppers can drop in and try out VR for themselves.

In a sense, what’s happening here is deeply ironic; after years of digital distribution and online shopping making retail all but irrelevant, to the point where it’s practically disappeared in some countries, the industry suddenly needs retail stores again – not to sell games, because those are, in truth, better sold online, but to sell hardware, to sell an experience. How exactly you structure a long-term business model around that – the games retailer as showroom – is something I’m honestly not sure about, but it’s something GameStop and its industry partners need to figure out, because what VR makes clear is that games do sometimes need a way to reach consumers physically, in the real world, and right now only games retail chains are positioned to do that.

This isn’t a one-time thing, either – we know that, because this has happened before, in the not-so-distant past. Nintendo’s Wii enjoyed an extraordinary sales trajectory from its first Christmas post-launch into its first full year on the market, not least because the company did a good job of putting demo units (mostly running Wii Sports, of course) into not only every games store in the world, but also into countless other popular shopping areas. It was nigh-on impossible, in the early months of the Wii, to go out shopping without encountering the brand, seeing people playing the games and having the opportunity to do so yourself – an enormously important thing for a device which, like VR, really needed to be experienced in person for its worth to become apparent. VR, if anything, magnifies that problem; at least with Wii Sports, observers could see people having fun with it. Observing someone using VR, as mentioned above, just looks daft and a bit uncomfortable.

GameStop has weathered the storm rather better than some of its peers in other countries. The United Kingdom has seen its games retail sector devastated; it’s all but impossible to actually walk into a specialist store and buy a game in many UK city centres, including London. Would a modern-day version of the Wii be able to thrive in an environment lacking these ready-made showrooms for its capabilities on every high street and in every shopping mall? Perhaps, but it would take enormous effort and investment; something that VR firms, especially Sony, are going to have to take very seriously as they plan how to get the broader public interested in their device, and how to break out beyond the early adopter market.

Much of the VR industry’s performance in 2016 is going to be measured in raw sales figures, which is a bit of a shame; Vive and Rift are enormously supply constrained and having fulfillment difficulties, and the numbers we’ve seen floating around for Sony’s intentions suggest that PSVR will also be supply constrained through Christmas. The VR industry – ignoring the slightly worrying, premature offshoot that is mobile VR – is going to sell every headset it can manufacture in 2016. If it doesn’t, then there’s a very serious problem, but every indication says that this year’s key limiter will be supply, not demand.

The real measurement of how VR has performed in 2016, then, should be something else – the purchasing intent and interest level of the rest of the population. If by the time the world is mumbling through the second line of Auld Lang Syne and welcoming in 2017, consumer awareness of VR is low and purchasing intent isn’t skyrocketing – or worse, if the media’s dominant narratives about the technology are all about vomiting and migraines – then the industry will have done itself a grievous disservice. This is the year of VR, but not for the vast majority of consumers – which means that the real task of VR firms in 2016 is to convince the world that a VR headset is something it simply must own in 2017.

Courtesy-GI.biz

 

Will AMD’s Polaris GPU Be A Success?

April 21, 2016 by Michael  
Filed under Computing

AMD is rumoured to have won some key contracts for its forthcoming Polaris GPU.

According to TechPowerUp, AMD’s Polaris will go under the bonnet of Apple Mac desktops and laptops, and AMD will supply a Polaris GPU with 2,304 stream processors to Sony for the PlayStation 4.5/PS4K.

On the Apple side, the rumour says that both of its upcoming Radeon 400 series 14nm FinFET graphics chips, Polaris 10 and Polaris 11, will provide iMacs and MacBooks with energy-efficient graphics acceleration.

There is no indication when the deal will go through. People have been waiting a long time for Apple to upgrade the Macs so a refresh could be due soon. Some think it could be in the second half of this year, soon after Polaris is officially announced.

It looks like the chips will be seen in the PlayStation 4.5 or 4K. The new SoC behind the PlayStation 4K, upgraded for 4K and VR gaming, will feature an 8-core 64-bit Jaguar x86 CPU running at 2.1GHz paired with a GPU with 2,304 stream processors and 36 next-gen GCN compute units. This sounds similar to the specs of the Polaris 10 ‘Ellesmere’ chip in its Radeon R9 480 configuration.

The stream processor count will be double that of the current PS4. It will have a 256-bit GDDR5 memory interface with 8GB of memory, increasing system memory bandwidth from 176GB/s to 218GB/s.
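
Those bandwidth figures imply a faster GDDR5 data rate per pin on the same 256-bit bus. Here is a minimal sketch of that arithmetic; the per-pin rates are inferred from the quoted bandwidths rather than stated in the rumour.

```python
# Sanity check on the memory bandwidth figures quoted above.
# GDDR5 bandwidth (GB/s) = bus width (bits) x per-pin data rate (Gbps) / 8.
bus_width_bits = 256

for label, bandwidth_gb_s in (("current PS4", 176), ("PS4K (rumoured)", 218)):
    data_rate_gbps = bandwidth_gb_s * 8 / bus_width_bits
    print(f"{label}: {bandwidth_gb_s} GB/s implies ~{data_rate_gbps:.2f} Gbps per pin")
# -> 5.50 Gbps for the current PS4, ~6.81 Gbps for the rumoured PS4K

# Stream processor check: 36 GCN compute units x 64 lanes each
print("Stream processors:", 36 * 64)               # 2,304, matching the rumour
```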

Courtesy-Fud