People working for the Chinese site VR Zone have found evidence that Intel will launch only two Broadwell desktop processors in Q2 2015.
The new Broadwell Desktop CPUs are based on the LGA1150 pin layout and will be compatible with the current Z97 motherboards.
ASUS and ASRock recently announced that their motherboards will be able to handle the new 14nm Broadwell processors with a BIOS update. The two new CPUs will be the Intel Core i7-5775C and Core i5-5675C.
There are some odd things on this list. It is not clear what the C stands for in the product names. Our local AMD fanboy says it stands for C*ap, while others have suggested camel or caramel, depending on how hungry they are. The processors are unlocked for overclocking like the previous K models, so it could be that the K has somehow become a C.
The new i7 has four cores and eight threads running at a base frequency of 3.3GHz with a turbo of 3.7GHz, while the i5 has four cores and four threads at a base speed of 3.1GHz and a turbo of 3.6GHz. The i7 comes with 6MB of cache while the i5 only has 4MB, and both are powered by the Intel Iris Pro Graphics 6200 iGPU.
Intel has launched its latest campaign to get back into the mobile market, and this time it might just get away with it.
Intel’s involvement in mobile is a history of dropped balls and lost opportunities. In 1996, Intel supplied the processor for the Nokia Communicator, which had early smartphone features, but it lost this business to AMD. In 1999, it supplied the processor for the early BlackBerry, but sold the business to Marvell in 2005. In 2004, it supplied the brains for the Palm Treo 650, an early smartphone that was discontinued four years later. In 2006, it snubbed a request from Apple to make a processor for the iPhone.
Now CEO Brian Krzanich is spending billions to gain a mobile foothold as Intel introduces new Atom microprocessors for smartphones and tablets.
In March, Intel announced a range of new products for mobile computing at the Mobile World Congress in Barcelona. In January, Intel combined its mobile and personal computing businesses into a single computing group.
It has formed alliances with two Chinese companies that make chips for mobile phones and consumer electronics products, and it is spending big to get into the new Internet of Things market.
Its new Core M range is also getting attention from mobile PC makers and companies that want to set up wireless offices.
All this is taking its toll. Last year, Intel posted a $4.2 billion loss in its mobile group by essentially subsidizing the purchase of its tablet chips by tablet makers. The company expects its mobile group to break even in 2016.
Bryant said this was a price that needed to be paid for sitting on the sidelines for a number of years and then fighting your way back into the market.
“We will improve this. We will not continue to accept a business with multibillion dollar losses, but this is the price you pay to get back in. We are getting back in.”
While it is easy to write off Intel in mobile, it is clear that there is a lot happening and Intel is prepared to spend money to get there. Already it is getting attention from the manufacturers, and maybe this time it will not drop any balls.
Pascal is Nvidia’s next generation architecture and it is coming after Maxwell of course. The company says it will launch next year, but details are still sketchy.
According to Nvidia CEO Jen-Hsun Huang, Pascal is coming with mixed precision support and is the new architecture that will succeed Maxwell. Nvidia claims the new GPU core has its own architectural benefits.
3D memory, or High Bandwidth Memory (HBM), is a big thing, and Jen-Hsun Huang claims 32GB is possible with the new architecture, compared to 12GB on the new Maxwell-based Titan X. That is a staggering increase from the current standard of 4GB per card, to 12GB with the Titan X, and probably up to 32GB with Pascal. NVLink should enable a very fast interconnect with five times the performance of PCI Express, which we all use right now. More memory and more bandwidth are obviously needed for 4K/UHD gaming.
Huang also shared some very rough estimates, including that convolution compute performance will be four times faster with FP16 precision in mixed precision mode, while the 3D memory offers a six-fold increase in GPU-to-memory bandwidth.
Convolution and bandwidth at the front, and bandwidth to convolution at the back of the GPU, should be five times faster than on Maxwell cards. It is complex fuzzy logic that is hard to explain with so few details shared by Nvidia about the Pascal architecture.
The wider NVLink interconnect should get you a twofold performance increase, and when you multiply these two numbers, Nvidia ends up with a 10x compute performance increase compared to Maxwell, at least in what the Nvidia CEO calls the “CEO bench”.
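Huang’s arithmetic can be reproduced in a couple of lines; the multipliers below are his rough claims, not measured numbers:

```python
# Back-of-the-envelope reproduction of the "CEO bench" arithmetic.
# The factors are the rough multipliers Huang quoted, not benchmark data.
convolution_speedup = 5.0    # mixed-precision convolution path vs. Maxwell
interconnect_speedup = 2.0   # NVLink vs. PCI Express data movement

combined = convolution_speedup * interconnect_speedup
print(f"Claimed speedup over Maxwell: {combined:.0f}x")  # → 10x
```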
He warned the audience that this is a very rough estimate. The 10x number mainly targets deep learning, as Pascal should be able to train a deep learning network ten times faster. This doesn’t mean the GPU offers ten times the gaming performance of Maxwell, not even close, we predict.
Volta made it back to the roadmap and currently it looks like the new architecture will be introduced around 2018, or about three years from now.
Cadence Design and Intel announced that the two companies have hammered out a 14nm (nanometer) library characterization reference flow for customers of Intel Custom Foundry.
The library characterization reference flow is centered on the Cadence Virtuoso Liberate Characterization solution and Spectre Circuit Simulator and enables accurate 14nm logic libraries.
The reference flow for 14nm logic libraries enables the creation of Liberty libraries, AOCV de-rating tables, library validation and reliability views.
The reference flow was developed using Virtuoso Liberate, Virtuoso Liberate LV, Virtuoso Variety characterization solutions and Spectre Circuit Simulator to deliver accurate logic libraries, including advanced timing models (ECSM, CCS), advanced noise models (ECSMN, CCSN), and advanced power models (ECSMP, CCSP).
The reference flow enables Intel Custom Foundry customers to re-characterize the logic libraries for custom process, voltage or temperature corners or to characterize custom cells following a similar characterization methodology.
Intel Custom Foundry has an extensive design platform on Intel’s 14nm Tri-gate process technology for systems-on-chip (SoCs) targeted at cloud infrastructure and mobile applications.
Intel’s 14nm platform is the second generation to use 3D Tri-gate transistors that enable chips to operate at lower voltage with lower leakage, providing an unprecedented combination of improved performance and energy efficiency compared to previous state-of-the-art transistors.
Ali Farhang, vice president, Design and Enablement Services, Intel Custom Foundry, said that accurate logic libraries were required to enable customers to implement and verify differentiated SoCs on Intel’s 14nm design platform.
“Intel Custom Foundry’s 14nm characterization reference flow includes best characterization practices jointly developed by Intel Custom Foundry and Cadence and can accelerate the ramp-up time for customers who need to re-characterize libraries,” he said.
TSMC is reportedly getting the majority of Apple A9 orders, which would be a big coup for the company.
An Asian brokerage firm released a research note, claiming that disputes over the number of Apple A9 orders from TSMC and Samsung are “coming to an end.”
The unnamed brokerage firm said TSMC will gain more orders due to its superior yield-ramp and “manufacturing excellence in mass-production.”
This is not all, as the firm also claims TSMC managed to land orders for all Apple A9X chipsets, which will power next generation iPads. With the A9X, TSMC is expected to supply about 70 percent of all Apple A9-series chips, reports Focus Taiwan.
While Samsung managed to beat other mobile chipmakers (and TSMC), and roll out the first SoC manufactured on a FinFET node, TSMC is still in the game. The company is already churning out 16nm Kirin 930 processors for Huawei, and it’s about to get a sizable chunk of Apple’s business.
TSMC should have no trouble securing more customers for its 16FF process, which will be supplemented by the superior 16FF+ process soon. In addition, TSMC is almost certain to get a lot of business from Nvidia and AMD once their FinFET GPUs are ready.
Intel has announced details of its first Xeon system on chip (SoC), the new Xeon D 1500 processor family.
Although it is being touted as a server, storage and compute applications chip at the “network edge”, word on the street is that it could be under the bonnet of robots during the next apocalypse.
The Xeon D SoCs use the more useful bits of the E3 and Atom SoCs along with 14nm Broadwell core architecture. The Xeon D chip is expected to bring 3.4x better performance per watt than previous Xeon chips.
Lisa Spelman, Intel’s general manager for the Data Centre Products Group, lifted the kimono on the eight-core 2GHz Xeon D 1540 and the four-core 2.2GHz Xeon D 1520, both running at 45W. Both feature integrated I/O and networking to slot into microservers and appliances for networking and storage, the firm said.
The chips are also being touted for industrial automation and may see life powering robots on factory floors. Since simple robots can run on basic, low-power processors, there’s no reason why faster chips can’t be plugged into advanced robots for more complex tasks, according to Intel.
IBM has high hopes that the upgraded model will generate solid sales, based not only on usual customer patterns but also on its design focus, which aims to help customers cope with expanding mobile usage, data analysis, security upgrades and more “cloud” remote computing.
Mainframes are still a major part of the Systems and Technology Group at IBM, which overall contributed 10.8 percent of IBM’s total 2014 revenues of $92.8 billion. But the z Systems and their predecessors also generate revenue from software, leasing and maintenance and thus have a greater financial impact on IBM’s overall picture.
The new mainframe’s claim to fame is to use simultaneous multi-threading (SMT) to execute two instruction streams (or threads) on a processor core which delivers more throughput for Linux on z Systems and IBM z Integrated Information Processor (zIIP) eligible workloads.
There is also a single Instruction Multiple Data (SIMD), a vector processing model providing instruction level parallelism, to speed workloads such as analytics and mathematical modeling. All this means COBOL 5.2 and PL/I 4.5 exploit SIMD and improved floating point enhancements to deliver improved performance over and above that provided by the faster processor.
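As a purely illustrative sketch of the vector-processing model described above, the toy Python below contrasts the scalar loop with SIMD-style batch processing. Python lists stand in for vector registers here; real z Systems code would rely on compiler-generated vector instructions, so this is conceptual only:

```python
LANES = 4  # pretend the core multiplies four values per vector instruction

def scalar_scale(data, factor):
    # One element per "instruction": the traditional scalar loop.
    return [x * factor for x in data]

def simd_scale(data, factor):
    # Process LANES elements per "instruction": the vector (SIMD) model.
    out = []
    for i in range(0, len(data), LANES):
        chunk = data[i:i + LANES]              # load a vector register
        out.extend(x * factor for x in chunk)  # one vector multiply
    return out

values = [1.5, 2.0, 2.5, 3.0, 3.5]
# Same result, but the SIMD version needs far fewer "instructions".
assert scalar_scale(values, 2) == simd_scale(values, 2)
```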
Its on-chip cryptographic and compression coprocessors receive a performance boost, improving both general processor and Integrated Facility for Linux (IFL) cryptographic performance and allowing compression of more data, helping to save disk space and reducing data transfer time.
There is also a redesigned cache architecture, using eDRAM technology to provide twice as much second level cache and substantially more third and fourth level cache compared to the zEC12. Bigger and faster caches help to avoid untimely swaps and memory waits while maximising the throughput of concurrent workloads. Tom McPherson, vice president of z System development, said that the new model was not just about microprocessors, though this model has many eight-core chips in it. Everything has to be cooled by a combination of water and air, and semiconductor scaling is slowing down, so “you have to get the value by optimizing.”
The first real numbers on how the z13 is selling won’t be public until comments are made in IBM’s first-quarter report, due out in mid-April, when a little more than three weeks’ worth of billings will flow into it.
The company’s fiscal fortunes have sagged, with mixed reviews from both analysts and the blogosphere. Much of that revolves around IBM’s lag in cloud services. IBM is positioning the mainframe as a prime cloud server, one of the systems that is actually what cloud computing goes to and runs on.
One of the hottest things we learned at the Mobile World Congress is that MediaTek is working with AMD on mobile SoC graphics.
This is a big deal for both companies: AMD is getting back into the ultra-low power graphics market, while MediaTek might finally get faster graphics and gain more appeal in the high-end segment. ARM Mali or Imagination Technologies GPUs are available to anyone, but as most of you know, Qualcomm has its own in-house Adreno graphics, while Nvidia uses ultra-low power Maxwell GPUs in its latest SoCs.
Since Nvidia exited the mobile phone business, it is now a two horse race between the ever dominant Qualcomm and fast growing MediaTek. The fact that MediaTek will get AMD graphics just adds fuel to the fire.
We have heard that key AMD graphics people are in continuous contact with MediaTek and that they have been working on an SoC graphics solution for a while.
MediaTek can definitely benefit from faster graphics. The recently pictured tablet SoC MT8173, powered by two Cortex-A72 cores clocked up to 2.4GHz and two Cortex-A53 cores, has PowerVR GX6250 graphics (two clusters). The most popular tablet chip, Apple’s A8X, has PowerVR Series 6XT GXA6850 (octa-core) graphics, which should end up significantly faster, but at the same time significantly more expensive.
The MediaTek MT6795 is a 28nm eight-core part with a 2.2GHz clock and a PowerVR G6200 GPU at 700MHz, which is 100MHz faster than the one we tested in the Meizu MX4, one of the fastest SoCs until Qualcomm’s Snapdragon 810 came out in late February.
AMD and MediaTek declined to comment on this upcoming partnership, but our industry sources say they have both been working on new graphics for future chips that will be announced at a later date. It’s cool to see that AMD will return to this market, especially as the company sold off its Imageon graphics back in 2009 for a lousy $65 million to Qualcomm. Imageon by ATI was the foundation for Adreno graphics.
Some 18 months ago, senior AMD graphics people reassured us that “AMD didn’t forget how to make good ultra-low power graphics”, and we guess this cooperation proves it.
Intel CEO Brian Krzanich said at the Goldman Sachs Technology and Internet conference that the new Core M chips are due in the second half of the year and will also extend battery life in tablets, hybrids, and laptop PCs.
The new chips will mean much thinner tablets and mobile PCs, which will make Apple’s Air look decidedly portly. Intel’s Core M chips, introduced last year, are based on Broadwell, but the Skylake-based chips should also improve graphics and general application performance.
The Skylake chips will be able to run Windows 10, as well as Google’s Chrome and Android OSes, Krzanich said. But most existing Core M systems run Windows 8.1, and Intel has said device makers haven’t shown a lot of interest in other OSes. So most Skylake devices will probably run Windows 10. Chipzilla is expected to give more details about the new Core M chips in June at the Computex trade show in Taipei.
Skylake systems will also support the second generation of Intel’s RealSense 3D camera technology, which uses a depth sensor to create 3D scans of objects, and which can also be used for gesture and facial recognition. The hope is that the combination of Skylake and a new Windows operating system will give the PC industry a much needed boost.
In related news, Intel announced that socketed Broadwell processors will be available in time for Windows 10.
AMD Liquid VR is not a retail product – it is an initiative to develop and deliver the best Virtual Reality (VR) experience in the industry.
AMD Liquid VR was announced at the Game Developers Conference in San Francisco, and the company describes it as a “set of innovative technologies focused on enabling exceptional VR content development” for hardware based on AMD silicon.
Developers will soon get access to the LiquidVR SDK, which will help them address numerous issues associated with VR development.
Platform and software rather than hardware
If you were expecting to see a sexy AMD VR headset with a killer spec, the announcement may be disappointing. However, if you are a “what’s under the bonnet” kind of geek, there are a few interesting highlights.
AMD has put a lot of effort into minimising motion-to-photon latency, which should not only help improve the experience, but also keep you from experiencing motion sickness, or hurling over that new carpet that really ties the room together.
Headline features of LiquidVR SDK 1.0 include:
Async Shaders for smooth head-tracking, enabling Hardware-Accelerated Time Warp, a technology that uses updated information on a user’s head position after a frame has been rendered, and then warps the image to reflect the new viewpoint just before sending it to a VR headset, effectively minimizing latency between when a user turns their head and what appears on screen.
Affinity Multi-GPU for scalable rendering, a technology that allows multiple GPUs to work together to improve frame rates in VR applications by allowing them to assign work to run on specific GPUs. Each GPU renders the viewpoint from one eye, and then composites the outputs into a single stereo 3D image. With this technology, multi-GPU configurations become ideal for high performance VR rendering, delivering high frame rates for a smoother experience.
Latest data latch for smooth head-tracking, a programming mechanism that helps get head tracking data from the head-mounted display to the GPU as quickly as possible by binding data as close to real-time as possible, practically eliminating any API overhead and removing latency.
Direct-to-display for intuitively attaching VR headsets, to deliver a seamless plug-and-play virtual reality experience from an AMD Radeon™ graphics card to a connected VR headset, while enabling features such as booting directly to the display or using extended display features within Windows.
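To make the late-latch and time-warp ideas above concrete, here is a minimal toy sketch. The function names (`read_head_pose`, `render_scene`, `warp`) are hypothetical stand-ins for illustration only, not part of the actual LiquidVR SDK:

```python
# Toy model of latest-data-latch plus time warp.
# Poses are modelled as a simple counter: a higher number means a fresher
# sensor sample. None of these names exist in the real LiquidVR API.
pose_sample = 0

def read_head_pose():
    """Pretend HMD sensor read; each call returns a fresher sample."""
    global pose_sample
    pose_sample += 1
    return pose_sample

def render_scene(pose):
    # Rendering takes a while, so this pose is stale by the time it finishes.
    return {"rendered_with": pose}

def warp(frame, fresh_pose):
    # Time warp: re-project the finished frame using the newest pose.
    frame["warped_to"] = fresh_pose
    return frame

frame = render_scene(read_head_pose())  # pose sampled early (sample 1)
frame = warp(frame, read_head_pose())   # pose latched late (sample 2)
assert frame["warped_to"] > frame["rendered_with"]  # displayed view is fresher
```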
You can grab the full AMD LiquidVR presentation here (PDF).
What’s next for LiquidVR?
It all depends on what you were expecting, and what the rest of the industry does. AMD hopes LiquidVR will be compatible with a broad range of VR devices. LiquidVR will allow hardware makers to implement AMD technology in their products with relative ease, enabling 100Hz refresh rates, the use of an individual GPU per eye, and so on.
To a certain extent, you can think of LiquidVR as FreeSync for VR kit.
Oculus CEO Brendan Iribe said achieving presence in a virtual world is one of the most important elements needed to deliver a good user experience.
He explained where AMD comes in:
“We’re excited to have AMD working with us on their part of the latency equation, introducing support for new features like asynchronous timewarp and late latching, and compatibility improvements that ensure that Oculus’ users have a great experience on AMD hardware.”
Raja Koduri, corporate vice president, Visual Computing, AMD, said content, comfort and compatibility are the cornerstones of AMD’s focus on VR.
AMD’s resident graphics guru said:
“With LiquidVR we’re collaborating with the ecosystem to unlock solutions to some of the toughest challenges in VR and giving the keys to developers of VR content so that they can bring exceptional new experiences to life.”
A picture is worth a thousand words, so here’s 3300 frames of AMD’s virtual reality vision.
It looks like the Mantle API developed by AMD is slowly reaching the end of its useful life.
Mantle has apparently served its purpose as a bridge between DirectX 11 and DirectX 12 and AMD is starting to tell new developers to focus their attention on DirectX and GLnext.
Raja Koduri, the Vice President of Visual and Perceptual Computing at AMD said in a blog post:
The Mantle SDK also remains available to partners who register in this co-development and evaluation program. However, if you are a developer interested in Mantle “1.0” functionality, we suggest that you focus your attention on DirectX® 12 or GLnext.
This doesn’t mean a quick death for Mantle. AMD suggests it will support its partners and that there are still titles to come with support for Mantle. Battlefield Hardline is one of them, and it’s a big one.
Back in November, AMD announced a Mantle update, telling the world that there were four engines and 20+ launched or upcoming titles, and that 10 developers had publicly announced their support for Mantle.
There are close to 100 registered developers in the Mantle beta program. The Frostbite 3 engine (Battlefield Hardline), CryEngine (Crysis series), Nitrous Engine (Star Citizen) and Asura Engine (Sniper Elite) currently support Mantle. Some top games, including Thief and Sid Meier’s Civilization: Beyond Earth, also support Mantle.
AMD will tell developers a bit more about Mantle at the Game Developers Conference 2015, which starts today in San Francisco, and will talk more about its definition of an open platform. The company will also tackle new capabilities beyond draw calls, and Mantle will remain available for people who are already part of the Mantle program.
However, AMD suggests new partners should look the other way and focus on alternatives. When we spoke with Raja and a few other people from AMD over the last few quarters, we learned that Mantle was never supposed to take on DirectX 12. You should look at Mantle as AMD’s wish list: it is what AMD wanted and needed before Microsoft was ready to introduce DirectX 12. Mantle is a low-level rendering API, and keep in mind that it came almost two years before DirectX 12.
The Battlefield 4 Mantle patch came in February 2014, roughly a year ago, and it showed a significant performance increase on supported hardware. Battlefield Hardline is the next big game to support Mantle and it arrives in two weeks. CryEngine also supports Mantle, but we will have to wait and see if that support ever translates into a shipping game.
Spotted by the GforGames site in GeekBench test results, running inside an unknown smartphone, MediaTek’s MT6795 managed to score 886 points in the single-core test and 4536 points in the multi-core test. These results were enough to put it neck and neck with the mighty Qualcomm Snapdragon 810 SoC tested in the LG G Flex 2, which scored 1144 points in the single-core and 4345 in the multi-core test. While the Snapdragon 810 did outrun the MT6795 in the single-core test, the multi-core test was clearly not kind to it.
The unknown device was running Android Lollipop and packed 3GB of RAM, which might have given the MT6795 an edge over the LG G Flex 2.
MediaTek’s octa-core MT6795 was announced last year and while we are yet to see some of the first design wins, recent rumors suggested that it could be powering Meizu’s MX5, HTC’s Desire A55 and some other high-end smartphones. The MediaTek MT6795 is a 64-bit octa-core SoC clocked at up to 2.2GHz, with four Cortex-A57 cores and four Cortex-A53 cores. It packs PowerVR G6200 graphics, supports LPDDR3 memory and can handle 2K displays at up to 120Hz.
With Mobile World Congress (MWC) 2015 kicking off in Barcelona on March 2nd, just a few days away, we are quite sure we will see more info as well as more benchmarks. While a single benchmark run on an unknown smartphone might not be the best representation of performance, it does show that MediaTek has a good chip and can compete with Qualcomm and Samsung.
According to Tom’s Hardware, one of the unexpected features of DirectX 12 is the ability to use Nvidia GPUs alongside AMD GPUs in multi-card configurations.
Because DirectX 12 operates at a lower level than previous versions of the API, it is able to treat all available video resources as one unit. Card model and brand make no difference to a machine running DX12.
This could mean that the days of PC gamers having to decide between AMD and Nvidia are over: they could pick their preferred hardware from both companies and enjoy the best of both worlds. They will also be able to mix old and new cards.
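As a toy illustration of pooling mixed-vendor resources, consider the sketch below. The adapter list and its fields are invented for the example; a real title would enumerate adapters through the DirectX 12 API rather than hard-coding them:

```python
# Hypothetical mixed-vendor adapter list. Under explicit multi-adapter,
# the application, not the driver, decides how to use each GPU.
adapters = [
    {"vendor": "Nvidia", "model": "GTX 970", "vram_gb": 4},
    {"vendor": "AMD",    "model": "R9 290",  "vram_gb": 4},
]

# Unlike SLI/CrossFire, memory need not be mirrored, so capacities add up.
total_vram = sum(a["vram_gb"] for a in adapters)
vendors = sorted({a["vendor"] for a in adapters})

print(f"Pooled VRAM: {total_vram} GB across vendors: {', '.join(vendors)}")
```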
However, there might be a few problems with all this. Rather than you worrying about hardware optimization, software developers will have to be on the ball to make sure their products work.
More hardware options means more potential configurations that games need to run on, and that could cause headaches for smaller studios.
It would appear that the world is rushing to Nvidia to buy its latest GPU at the expense of AMD.
According to the data, Nvidia and AMD each took dramatic swings from Q4 2013 to Q4 2014, with Nvidia gaining market share at AMD’s expense: AMD’s share dropped from 35 per cent at the end of 2013 to just 24 per cent at the end of 2014.
Meanwhile, Nvidia has gone from 64.9 per cent at the end of 2013 to 76 per cent at the end of 2014.
The report, JPR’s AIB Report, looks at computer add-in graphics boards (AIBs), which carry discrete graphics for desktop PCs, workstations, servers, and other devices such as scientific instruments.
In all cases, AIBs represent the higher end of the graphics industry using discrete chips and private high-speed memory, as compared to the integrated GPUs in CPUs that share slower system memory.
On a year-to-year basis, total AIB shipments during the quarter fell by 17.52 per cent, a steeper drop than desktop PCs, which fell by 0.72 per cent.
However, in spite of the overall decline, somewhat due to tablets and embedded graphics, the PC gaming momentum continues to build and is the bright spot in the AIB market.
The overall PC desktop market increased quarter-to-quarter, including double-attach (the adding of a second or third AIB to a system with integrated processor graphics) and, to a lesser extent, dual AIBs in performance desktop machines using either AMD’s CrossFire or Nvidia’s SLI technology.
The attach rate of AIBs to desktop PCs declined from a high of 63 per cent in Q1 2008 to 36 per cent this quarter.
In other words, it is clear that the Radeon R9 285 release didn’t have the impact AMD had hoped for, and Nvidia’s Maxwell GPUs, the GeForce GTX 750 Ti, GTX 970 and GTX 980, have impacted the market even more than expected.
This is ironic because the GTX 970 has been getting a lot of negative press over its memory issue, while AMD makes some good gear, has better pricing, and fields a team of talented and aggressive PR and marketing folks.
Intel’s exascale computing efforts have received a boost with the extension of the company’s research collaboration with the Barcelona Supercomputing Center.
Begun in 2011 and now extended to September 2017, the Intel-BSC work is currently looking at scalability issues with parallel applications.
Karl Solchenbach, Intel’s director of the Innovation Pathfinding Architecture Group in Europe, said it was important to improve the scalability of threaded applications on many-core nodes through the OmpSs programming model.
The collaboration has developed a methodology to measure these effects separately. “An automatic tool not only provides a detailed analysis of performance inhibitors, but also it allows a projection to a higher number of nodes,” says Solchenbach.
BSC has been making HPC tools and has given Intel an instrumentation package (Extrae), a performance data browser (Paraver), and a simulator (Dimemas) to play with.
Charlie Wuischpard, VP and GM of High Performance Computing at Intel, said that the Barcelona work is pretty big scale for Chipzilla.
“A major part of what we’re proposing going forward is work on many core architecture. Our roadmap is to continue to add more and more cores all the time.”
“Our Knights Landing product that is coming out will have 60 or more cores running at a slightly slower clock speed but give you vastly better performance,” he said.