SOA Software has today launched an application programming interface (API) gateway that allows businesses to expose their APIs through a built-in cloud-based developer community, helping them grow their services and get up and running more quickly.
The firm’s CTO Alistair Farquharson said the API Gateway is a new concept in API and SOA management, one that aims to “deliver new advantages in the application-level security space”.
“The new API Gateway provides monitoring, security and, more uniquely, a developer community as well, so kind of a turnkey approach to an API gateway where a customer can buy that product, get it up and running, expose their API and expose the developer community to the outside world,” Farquharson said.
“[It will] support and manage the porting of mobile applications or web apps or B2B partnerships.”
Farquharson explained that there are three main components within the Gateway, which SOA Software has termed a “unified services gateway”: a runtime component, a policy manager and a developer community.
The runtime component handles the message traffic, while the policy manager can apply a range of different policies, such as threat protection, authentication, authorisation, anti-virus, monitoring, auditing and logging.
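The policy-chain idea can be sketched in a few lines. Everything below — the policy names, the checks, the message format — is invented purely for illustration; it is not SOA Software’s actual API.

```python
# A hypothetical policy chain: each policy inspects a message and either
# passes it on or rejects it. All names and checks here are made up.
def authenticate(msg):
    return msg.get("token") == "secret"

def scan_threats(msg):
    # A toy "threat protection" check.
    return "<script>" not in msg.get("body", "")

def log_message(msg):
    print("audit:", msg.get("body", ""))
    return True

POLICIES = [authenticate, scan_threats, log_message]

def gateway(msg):
    """Run a message through every policy; any failure blocks it."""
    return all(policy(msg) for policy in POLICIES)

print(gateway({"token": "secret", "body": "hello"}))  # True
```

Because `all()` short-circuits, a message rejected by an early policy never reaches the later ones, which is the usual behaviour for this kind of chain.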
“The whole objective here is to get a customer up and running with APIs as quickly as possible to meet some kind of a business need that they have, whether that’s a mobile application initiative or a web application, integration or syndication,” Farquharson added.
The third component is the Gateway’s cloud-based “developer community”, which exposes an organisation to the outside world so developers can take a look at its APIs, read its documentation and work out how to interact with them.
It’s this component that sets SOA Software’s Gateway apart from similar appliances on the market, Farquharson claims.
“It essentially becomes the developer site for your organisation, with it all running on a single appliance which is rather unique,” he added.
“The interesting thing about the gateway is that it does APIs as well as services [that are] needed for mobile devices, so you have the old and the new encapsulated in a single appliance, which is very important to our customers.”
The developer community is offered as a service, “like the Salesforce of APIs”, Farquharson said.
“Developers can go there and build their community, and it provides them with a high level of service and availability and a global infrastructure, and they can leverage the strength of their community to get themselves going.”
Some well-known industry analysts are suggesting that Microsoft could be as much as six months behind on software development for the Xbox Next. According to these sources, a combination of events has put Microsoft in this position, but it seems that some titles being developed internally have been canned. The situation has led to Microsoft seeking exclusives from third-party sources to fill in the gaps.
We first suggested a link between EA and Microsoft on some sort of an exclusive deal back when EA was absent from the Sony press conference earlier this year. Now, we find that they have a deal of some sort for the new Respawn title, which will apparently be exclusive to the Xbox 360 and Xbox Next. That’s not all, as Microsoft is expected to have more exclusives to announce. The real question is whether these are true exclusives or just timed exclusives that will turn up on the PS3/PS4 at some point in the future.
Even if Microsoft’s internal exclusives are thin for the Xbox Next at launch, we expect the company to catch up; we don’t see a big gap developing, because Microsoft has solid properties to use on the Xbox Next and will get those titles developed and out. No worries: it will be similar to every console launch, where software is thin when the system is released.
AMD has said the memory architecture in its heterogeneous system architecture (HSA) will move management of CPU and GPU memory coherency from the developer’s hands down to the hardware.
While AMD has been churning out accelerated processing units (APUs) for the best part of two years now, the firm’s HSA is the technology that will really enable developers to make use of the GPU. The firm revealed some details of the memory architecture that will form one of the key parts of HSA and said that data coherency will be handled by the hardware rather than software developers.
AMD’s HSA chips, the first of which will be Kaveri, will allow both the CPU and GPU to access system memory directly. The firm said that this will eliminate the need to copy data to the GPU, an operation that adds significant latency and can wipe out any gains in performance from GPU parallel processing.
According to AMD, the memory architecture that it calls HUMA – heterogeneous uniform memory access, a play on uniform memory access – will handle coherency between the CPU and GPU at the silicon level. AMD corporate fellow Phil Rogers said that developers should not have to worry about whether the CPU or GPU is accessing a particular memory address, and similarly he claimed that operating system vendors prefer that memory coherency be handled at the silicon level.
Rogers also talked up the ability of the GPU to take page faults, and said that HUMA will allow GPUs to use memory pointers in the same way that CPUs dereference pointers to access memory. The CPU will be able to pass a memory pointer to the GPU, just as a programmer may pass a pointer between threads running on a CPU.
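That thread analogy can be made concrete. The sketch below uses Python threads to show the zero-copy idea Rogers is describing: a reference to a shared buffer is handed to a worker instead of copying the data, loosely analogous to the CPU passing a pointer to the GPU under HUMA. This is only an illustration of the concept, not AMD’s API.

```python
import threading
import queue

# A shared buffer: the "memory" both sides can see.
buf = bytearray(8)
q = queue.Queue()

def gpu_like_worker():
    # Receive a reference to the buffer -- no copy is made.
    data = q.get()
    for i in range(len(data)):
        data[i] = i * 2  # mutate the shared memory in place

t = threading.Thread(target=gpu_like_worker)
t.start()
q.put(buf)          # hand over the reference, analogous to passing a pointer
t.join()
print(list(buf))    # the "CPU" sees the worker's writes: [0, 2, 4, ..., 14]
```

The point of HUMA is that this hand-off works without the copy-to-GPU-memory step that discrete GPGPU programming normally requires.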
AMD has said that its first HSA-compliant chip codenamed Kaveri will tip up later this year. While AMD’s decision to give GPUs access to DDR3 memory will mean lower bandwidth than GPGPU accelerators that make use of GDDR5 memory, the ability to address hundreds of gigabytes of RAM will interest a great many developers. AMD hopes that they will pick up the Kaveri chip to see just what is possible.
AMD claims that the delay in transitioning from 28nm to 20nm highlights the beginning of the end for Moore’s Law.
AMD was one of the first consumer semiconductor vendors to make use of TSMC’s 28nm process node with its Radeon HD 7000 series graphics cards, but like every chip vendor it is looking to future process nodes to help it increase performance. The firm said the time taken to transition to 20nm signals the beginning of the end for Moore’s Law.
Famed Intel co-founder and electronics engineer Gordon Moore predicted that the number of transistors on a chip would double every two years. He also predicted that the ‘law’ would not continue to apply for as long as it has. It was professor Carver Mead at Caltech who coined the term Moore’s Law, and now one of Mead’s students, John Gustafson, chief graphics product architect at AMD, has said that Moore’s Law is ending because it actually refers to a doubling of transistors that are economically viable to produce.
Gustafson said, “You can see how Moore’s Law is slowing down. The original statement of Moore’s Law is the number of transistors that is more economical to produce will double every two years. It has become warped into all these other forms but that is what he originally said.”
According to Gustafson, the transistor density afforded by a process node defines the chip’s economic viability. He said, “We [AMD] want to also look for the sweet spot, because if you print too few transistors your chip will cost too much per transistor and if you put too many it will cost too much per transistor. We’ve been waiting for that transition from 28nm to 20nm to happen and it’s taking longer than Moore’s Law would have predicted.”
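The original formulation Gustafson refers to is easy to state in code; the one-billion-transistor starting figure below is invented purely for the example.

```python
def moores_law(transistors, years):
    """Projected transistor count after `years`, doubling every two years."""
    return transistors * 2 ** (years / 2)

# A 1-billion-transistor chip "should" reach 4 billion in four years...
print(moores_law(1e9, 4) / 1e9)  # 4.0
# ...but a delayed node transition stretches the doubling period, which
# is exactly the slowdown Gustafson is pointing at.
```

Gustafson’s refinement is that the count that matters is transistors that are *economical* to produce, so a node that is late or expensive breaks the cadence even if denser transistors are physically possible.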
Gustafson was pretty clear in his view of transistor density, saying, “I’m saying you are seeing the beginning of the end of Moore’s law.”
AMD isn’t the only chip vendor looking to move to smaller process nodes, and it has to wait on TSMC and Globalfoundries before it can make the move. Even Intel, with its three-year process node advantage over the industry, is having problems justifying the cost of its manufacturing business to investors, so it could be the economics rather than the engineering that puts an end to Moore’s Law.
ARM has been pushing its Mali GPU architecture both as a graphics engine and as a GPGPU accelerator through its support of OpenCL. The firm said that not only could chip vendors see improved performance by offloading onto the GPU but they could also cut costs by moving tasks from dedicated silicon to the GPU.
Ian Smythe, director of marketing at ARM, also said that firms could move tasks such as image stabilization over to the GPU, which is already on the system on chip (SoC), and cut back on other parts of the chip.
Smythe said, “You might be able to save some cost somewhere in the SoC by cutting out a bit of hardware that you had and run it on the GPU instead. So cost reduction and an improved capability. So maybe they will cut out some of the ISP and they will do it on the GPU because the silicon is already there, it’s power efficient, it’s a quicker way of doing [it], you get a cost reduction, and performance goes up.”
Aside from lower cost chips, Smythe’s comments suggest that chip vendors and in particular vendors such as Samsung, which also develops software for its Android smartphones, will make use of GPGPUs as a way of cutting bill of materials costs and reducing chip complexity.
Smythe added that GPGPU computing doesn’t have a single ‘killer application’ but rather can be applied to use cases such as gaming and computational photography, as seen in the HTC One. However, given Smythe’s comments, perhaps the real killer application for hardware vendors will come from lower material costs.
Richland is set to replace AMD’s Virgo platform, powered by Trinity processors, and this change will happen in June 2013, most likely coinciding with Computex 2013.
AMD has just launched the first batch of Richland mobile APUs, which we have already written about, and we have yet to see notebook designs hitting the market.
Desktop Richland has been set to launch in June 2013 since late last year, and the fastest of the parts is the A10 6800K, clocked at 4.1GHz, or 4.4GHz with Turbo. It also features Radeon HD 8670D graphics running at 844MHz. This is the fastest Richland part and it comes unlocked, ready to replace the current AMD A10 5800K. In Europe the A10 5800K currently sells for €112, while in the US the same CPU sells for $129.00 (boxed).
The alpha dog A10 6800K is followed by A10 6700, A8 6600K (Unlocked) and A8 6500. AMD has a mix of 100W and 65W quad-core Richland desktop SKUs. There will be a single A6 6400K (Unlocked) SKU and the A4 6300, both dual-cores with 65W TDP.
Production ready samples were churned out in late January, while volume production is scheduled for late March 2013. The announcement was always scheduled for June 2013, and Richland will last through most of 2013, until Kaveri with 28nm Steamroller comes online.
There have been more than enough leaks dealing with Richland, AMD’s successor to the Trinity powered Virgo platform, and we even had a chance to see some leaks regarding its successor, codenamed Kaveri. As you may already know, Richland is planned to last through 2013 and it is clear that this is a very important chip for AMD.
Based on the Piledriver architecture and built using 32nm technology, Richland will feature an integrated GPU upgraded to the Radeon HD 8000 series, a generation ahead of Trinity. As you know, there have been a lot of leaks regarding Richland parts, and the quad-core A10-6800K with Radeon HD 8670D graphics is expected to pack quite a punch. Best of all, Richland will still use the same FM2 socket.
According to our sources, the NDA will be lifted on the 12th of March at 8am EST, and we are sure we will see at least a couple of reviews as well as some additional info regarding pricing and the availability date.
According to Humans Invent, the breakthrough allows quantum computers to exchange data at the speed of light along optical fibres. Lead researcher on the project Tracy Northup said that the method allows quantum information to be mapped faithfully from an ion onto a photon.
Northup’s team used an “ion trap” to produce a single photon from a trapped calcium ion with its quantum state intact, using mirrors and lasers. No potential cats were injured in the experiment. The move enables boffins to start to play with thousands of quantum bits rather than just a dozen or so. This means that they can get a computer to do specific tasks, like factoring large numbers or searching a database, faster.
Following the recent tease, AMD has detailed its new TressFX technology, aimed at creating much more realistic hair in games. The technology will debut in Tomb Raider 2013, making Lara Croft’s hair impressively more realistic.
According to the AMD blog that explains TressFX, it uses DirectCompute to unlock the processing capabilities of the GCN architecture and is based on AMD’s previous work on Order Independent Transparency (OIT), a method that makes use of Per-Pixel Linked-List (PPLL) data structures to manage rendering complexity and memory usage. The new TressFX for Hair has been developed in collaboration between AMD and Crystal Dynamics in order to bring quite an improvement to hair rendering and physics.
The blog post additionally explains that DirectCompute is used to perform real-time physics simulation of TressFX Hair, treating each strand of hair as a chain with dozens of links and thus allowing elements like gravity, wind and movement to influence Lara’s hair realistically. Each strand also gets collision detection, which ensures that strands will not pass through each other or any other solid surface.
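The chain-of-links approach described above is a standard constraint-based technique in game physics. Below is a minimal 2D sketch of the idea using Verlet integration and distance constraints; all parameters are invented and this illustrates the general technique, not AMD’s actual DirectCompute implementation.

```python
import math

# Strand parameters: 8 links of unit length, simple gravity, 60Hz timestep.
N_LINKS, LINK_LEN, GRAVITY, DT = 8, 1.0, -9.8, 1 / 60

# Each point stores its current and previous position (x, y); the strand
# starts hanging straight down from the origin.
pos = [(0.0, -i * LINK_LEN) for i in range(N_LINKS)]
prev = [p for p in pos]

def step():
    global pos, prev
    # 1. Verlet integration: velocity is implicit in (pos - prev).
    new = []
    for (x, y), (px, py) in zip(pos, prev):
        new.append((2 * x - px, 2 * y - py + GRAVITY * DT * DT))
    prev, pos = pos, new
    # 2. Constraint relaxation: pull neighbouring points back to LINK_LEN
    #    apart, then re-pin the root of the strand to the "scalp".
    for _ in range(10):
        for i in range(N_LINKS - 1):
            (x1, y1), (x2, y2) = pos[i], pos[i + 1]
            dx, dy = x2 - x1, y2 - y1
            d = math.hypot(dx, dy) or 1e-9
            corr = (d - LINK_LEN) / d / 2
            pos[i] = (x1 + dx * corr, y1 + dy * corr)
            pos[i + 1] = (x2 - dx * corr, y2 - dy * corr)
        pos[0] = (0.0, 0.0)

for _ in range(60):  # simulate one second
    step()
```

A production system like TressFX runs thousands of these strands in parallel on the GPU and adds wind forces plus the collision detection mentioned above; the per-strand maths, however, is of this general shape.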
According to what we can see from the blog post, AMD recommends the GCN-based AMD Radeon HD 7000 series as a particularly well equipped graphics card series for this type of task but did not exclude other GPUs either.
In any case, the effects on the hair certainly look impressive, and we look forward to seeing what else AMD can pull off with the TressFX technology and how open source it will actually be.
With the PlayStation 4 unveiled and rumors swirling that Microsoft is preparing to announce a new Xbox in April, next-gen is all the buzz right now. These are massive investments from the respective platform holders, and under the old “razor and blades” model the hope is to make back much of the money on software. And since some of that software is going to cost a good deal more to develop (although not as much as some think, says Hermen Hulst), should consumers be worried that $69.99 will become the new standard AAA game price?
The consensus seems to be that $59.99 should hold, but some big budget titles like Call of Duty could get away with charging more.
Sony Computer Entertainment America boss Jack Tretton told AllThingsD that PS4 will support a variety of prices from $0.99 to the $60 range (of course, “range” could imply $69.99). But the bottom line is that in this digital era, a variety of content will lead to all kinds of pricing. And as EEDAR’s Jesse Divnich pointed out to us, publishers can maintain the $59.99 price but bring in much more revenue with additional DLC.
“The $59.99 price point in the United States for next-generation games is unlikely to change. As we’ve seen through the years, however, revenue per game has increased gradually as publishers have been able to capitalize on additional in-game and digital content,” he said. “With publishers focusing on fewer, yet bigger and longer lasting titles, I’d expect publishers to keep the $59.99 price point intact, but expand on their digital offerings with more in-game content and expansion packs.”
He added, “And I don’t think this is a scenario where publishers ship a ‘base’ product and gouge on digital offerings. We believe these digital offerings, like they are today, will expand upon the player experience and offer even more value than they do today.”
David Cole of DFC Intelligence agrees. While he thinks the “super AAA” games may test out the $70 price, most content will come in much lower than that. “I think we will see an incredibly wide range of prices. Premium games command premium prices. Think Skylanders, Collector’s editions, Guitar Hero and Wii Fit in their day. What gets squeezed is the stuff in the middle that must compete with high-end development on one hand and low cost/low price games on the other,” he pointed out. “So you have fewer big budget titles but those will have even bigger budgets and that will be cost passed on to the consumer. Of course, very few games will be able to do this.”
Even if there is a slight bump in AAA game pricing, the average selling price (ASP) will begin coming down as the cycle advances, according to IDC Research manager Lewis Ward.
“While there will always be collector’s and limited edition console game discs that cost $80 or more, I’m not projecting that the PS4 or next-gen Xbox will raise the typical ‘AAA’ game disc to $70. 7th gen disc ASPs have trended down a couple dollars per year since 2006-2007. 8th generation discs will come in closer to $60 – which we’re already seeing with Wii U – and then start trending down a few dollars per year. So there will probably be a ~$10 gap in pricing between 7th and 8th gen discs, but due to ASP slippage over time, the overall console discs ASP through 2016 should remain in the low to mid-$40s range,” Ward explained.
An ASP in the mid-$40s is palatable for many hardcore gamers, but the console business is still going to have to face the fact that mobile, tablets, and free-to-play are changing the gaming landscape and the business of games. With the PS4 supposedly being more open than any console before it, hopefully developers will be able to offer more free-to-play games and titles at more attractive prices.
AMD has released its Firepro R5000 graphics card that has video over IP capabilities.
AMD typically promotes its workstation class Firepro cards using CAD/CAM software; however, this time the company is relying on remote viewing as the big selling point for its latest workstation graphics card. AMD’s Firepro R5000 has a GPU that uses its Graphics Core Next (GCN) architecture and Teradici PC video over IP technology to send graphics output over the network.
AMD used its Pitcairn GPU coupled with 2GB of GDDR5 memory in the Firepro R5000. However, it isn’t AMD’s GPU that is the big selling point of the Firepro R5000 but rather Teradici’s Tera2240 chip, which encrypts display output before sending it out on the network, while supporting up to 60fps (frames per second).
AMD’s Firepro R5000 is intended to be used in render farms, with each final image being sent over an IP network to the end host, and the firm claims that the technology can be used in education, financial and media environments.
The Firepro R5000 is a single-slot graphics card with two Mini DisplayPort outputs that can drive two 2560×1600 displays; it can also drive a further four remote displays at 1920×1200 by sending data over its RJ45 Ethernet port.
Both AMD and Teradici talked up the low configuration overheads of the Firepro R5000.
Analysts at Jon Peddie Research have penned a report claiming that combined discrete and integrated GPU shipments in Q4 2012 dropped 8.2 per cent sequentially and 11.5 per cent year-on-year.
As you would expect, they blame the popularity of tablets and the persistent recession, although quite how that works is unclear, as the last time we checked it was impossible to plug a tablet in instead of a graphics card. More likely it is the usual rubbish about how the world is buying tablets instead of PCs, rather than the more obvious fact that no one upgrades their PC in a recession. Anyway, JPR has revised its compound annual growth rate for PC graphics from 2012 to 2016 to 3.2 per cent, with total shipments of graphics chips in 2016 expected to be 549 million units.
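As a quick sanity check on JPR's figures, a 3.2 per cent compound annual growth rate ending at 549 million units in 2016 implies a 2012 base of roughly 484 million units; the arithmetic below is illustrative only.

```python
# Work backwards from JPR's 2016 projection to the implied 2012 base:
# base * (1 + cagr)^years = 2016 total.
cagr, years, units_2016 = 0.032, 4, 549e6
units_2012 = units_2016 / (1 + cagr) ** years
print(round(units_2012 / 1e6))  # ~484 million units
```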
Each of the major players will be upset with the results. Intel’s shipments dropped the least quarter-to-quarter, at just 2.9 per cent, while AMD slipped 13.6 per cent and Nvidia was down 16.7 per cent. In terms of market share, Intel held on to its dominant position and actually gained 3.4 per cent to reach 63.4 per cent, at the expense of AMD, which dropped to 19.7 per cent, and Nvidia, which fell to 16.9 per cent.
Jon Peddie notes that the numbers suggest more and more users are finding that embedded graphics, such as those found in Intel’s and AMD’s processors, are “good enough”. This is indicative of the general falling standards in the world where rubbish is hyped to the skies at the expense of good stuff. We have dubbed this the “Apple Bieber” factor.
It looks like AMD has changed its mind about 8000 series graphics, as parts of the Solar System range of products are slowly showing up in the market.
We have an indication that at some point in the latter part of 2013 there will be a new Sea Islands based graphics line from AMD. What we refer to as the HD 8000 series will come at some point, but AMD may actually end up branding some of these new parts as HD 7000 series products as well.
We were more interested in the big picture, when and whether we will see an entirely new generation in 2014, let’s call it 9000 series for now. Multiple sources have told us that AMD will stick with discrete graphics for the foreseeable future. In 2014, we should see the new HSA generation as well as a steady roadmap for the future beyond 2014.
Remember, AMD still makes quite a bit of money on graphics. It doesn’t make a lot, but it doesn’t build GPUs at a loss either. Its graphics integrated into CPUs, APUs if you will, also help AMD sell more cores, and this is why AMD will stick with making new graphics cores in the future. Technology developed for high-end discrete graphics will trickle down to APUs over time.
Winning all three consoles, including the already launched Nintendo Wii U as well as the soon-to-launch Xbox 720 (Next) and PlayStation 4, will definitely help AMD perform even better in the future and build closer relations with developers.
Nvidia will have someone to fight in this market and AMD will continue to make discrete graphics cards, as well as notebook chips. Both companies will fight for as much market share as they possibly can, and analysts who claim either of them is about to ditch the discrete market are dead wrong.
We don’t have the clocks yet, but we managed to get five names of AMD’s new desktop low power processors codenamed Kabini.
Kabini has two to four Jaguar cores, AMD Turbo Core overclocking, Radeon HD 8000 series graphics, DirectX 11.1 support, up to DDR3 1866 and comes in an FT3 BGA package. The role of Kabini is to replace the outdated Zacate E series of APUs.
The top of the line is the X4 5110, which features an HD 8310G graphics core and, with its four cores, manages to stay under a 25W TDP. The runner-up is the X4 4410, which features the same HD 8310G graphics but a much lower 15W TDP.
The third processor in the line is the X2 3450, with slower HD 8280 graphics and most likely two CPU cores, operating under a 15W TDP. The first three processors are starting production in the latter part of Q1 2013, but the official launch is expected in June, most likely at Computex in late May or early June 2013.
The last two Kabinis to launch are named E1 3310 and E1 2210. They look like the direct replacement for existing Brazos 2.0 E2 2000 and E2 1500 processors. The E1 3310 has HD 8240G graphics and 15W TDP.
The last part on the Kabini desktop APU list is the E1 2210, and we don’t even have the TDP details for this part.
The only additional thing we can say about this CPU is that production ready samples are expected in March 2013, with volume production starting shortly after. Both Kabini E series processors are expected to launch in June 2013.
While some people are packing in smoking, AMD thinks it is a good idea to pack in releasing new graphics cards, at least for a year or so.
AMD marketing manager Robert Hallock told Megagames that the company has no intentions to release Radeon HD 8000 series cards for the foreseeable future.
“The HD 7000 Series will remain our primary focus for quite some time,” he said.
When pressed, he said that AMD and its partners are happy with the HD 7000 Series, and it will continue to be its emphasis in the channel for the foreseeable future. There had been some rumours that an HD 8000 Series was being developed. However, AMD has never confirmed it, and so far there has been no proof that any such parts were ready to show up.
Hallock’s comments lend credibility to rumours that the HD 8000 family won’t be out before Q4 2013.