Amazon’s Zocalo Goes Mobile

November 24, 2014 by Michael  
Filed under Around The Net

Amazon Web Services (AWS) has announced two much-needed boosts to its fledgling Zocalo productivity platform, making the service mobile and allowing for file capacities of up to 5TB.

The service, Amazon’s answer to Google Drive and Microsoft’s Office 365 subscription suite, has gained mobile apps for the first time as Zocalo appears on the Google Play store and the Apple App Store.

Amazon also mentions availability on the Kindle store, but we’re not sure about that bit. We assume it means the Amazon App Store for Fire tablet users.

The AWS blog says that the apps allow the user to “work offline, make comments, and securely share documents while you are in the air or on the go.”

A second announcement brings Zocalo into line with the AWS S3 storage on which it is built. Users will receive an update to their Zocalo sync client which will enable file capacities up to 5TB, the same maximum allowed by the Amazon S3 cloud.

To facilitate this, multi-part uploads will allow users to resume an upload from where it left off after a break, deliberate or accidental.
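
Multi-part upload is a mechanism of the underlying S3 API: a large file is cut into independently uploaded parts, so an interrupted transfer only loses the part in flight rather than the whole file. As a rough sketch of the mechanism using the boto3 Python SDK against plain S3 (the bucket and file names are hypothetical; Zocalo’s sync client does the equivalent work behind the scenes):

    import boto3

    s3 = boto3.client("s3")
    bucket, key, path = "example-bucket", "big-file.bin", "big-file.bin"
    part_size = 100 * 1024 * 1024  # 100MB parts; S3 allows up to 10,000 parts

    upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
    parts = []
    with open(path, "rb") as f:
        part_number = 1
        while True:
            chunk = f.read(part_size)
            if not chunk:
                break
            # Each part uploads (and can be retried) independently, which is
            # what lets an interrupted transfer resume instead of restarting.
            result = s3.upload_part(Bucket=bucket, Key=key,
                                    PartNumber=part_number,
                                    UploadId=upload["UploadId"], Body=chunk)
            parts.append({"PartNumber": part_number, "ETag": result["ETag"]})
            part_number += 1

    # S3 only assembles the final object once every part has arrived.
    s3.complete_multipart_upload(Bucket=bucket, Key=key,
                                 UploadId=upload["UploadId"],
                                 MultipartUpload={"Parts": parts})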

Zocalo was launched in July as the fight for enterprise storage and productivity hots up. The service can be trialled free of charge for 30 days, offering 200GB each for up to 50 users.

Rival services from companies including the aforementioned Microsoft and Google, as well as Dropbox and Box, coupled with aggressive price cuts across the sector, have led to burgeoning wars for the hearts and minds of IT managers as Microsoft’s Office monopoly begins to wane.

Courtesy-TheInq

NVidia Reveals The Tesla K80 GPU

November 20, 2014 by Michael  
Filed under Computing

Nvidia has unveiled what it claims is the world’s highest-performing GPU accelerator designed for high performance computing (HPC) applications.

Launched as an addition to the Tesla Accelerated Computing Platform, the Tesla K80 dual GPU accelerator is the most powerful in Nvidia’s line-up and is aimed at accelerating a wide range of data analytics, scientific computing and machine learning applications.

“It combines the world’s fastest GPU accelerators, the widely used CUDA parallel computing model, and a comprehensive ecosystem of software developers, software vendors, and data centre system OEMs,” said Nvidia.

The firm explained how the Tesla K80 delivers almost double the performance and double the memory bandwidth of its predecessor, the Tesla K40 GPU accelerator.

“With 10 times higher performance than today’s fastest CPU, it outperforms CPUs and competing accelerators on hundreds of complex analytics and large, computationally intensive scientific computing applications,” the firm added.

The accelerator boasts an enhanced version of Nvidia’s GPU Boost technology, which dynamically converts power headroom into the optimal performance uplift for each individual application.

The GPU was designed to tackle “the most difficult computational challenges”, ranging from astrophysics and genomics to quantum chemistry and data analytics.

It is also optimised for deep learning tasks, a segment of the machine learning field which Nvidia says is the fastest growing.

Featuring two GPUs per board, the Tesla K80 dual-GPU accelerator doubles throughput of applications designed to take advantage of multiple GPUs.
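
To the host, a dual-GPU board like the K80 simply shows up as two CUDA devices, and an application gets the doubled throughput by addressing each device explicitly. A minimal sketch, assuming the PyCUDA bindings are installed:

    import pycuda.driver as cuda

    cuda.init()
    # A Tesla K80 enumerates as two separate CUDA devices, one per on-board GPU.
    for i in range(cuda.Device.count()):
        dev = cuda.Device(i)
        print("Device %d: %s, %.1f GB" % (i, dev.name(),
                                          dev.total_memory() / 1024.0 ** 3))
    # Doubling throughput means giving each device its own context and its own
    # half of the dataset, e.g. ctx = dev.make_context() ... ctx.pop()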

It offers 24GB of ultra-fast GDDR5 memory (12GB per GPU), twice the memory of the Tesla K40 GPU, which allows the processing of double-sized datasets.

A 480GBps memory bandwidth offers increased data throughput, so that data scientists can crunch through petabytes of information in half the time compared with the Tesla K10 accelerator.

Nvidia said that the GPU’s 4,992 CUDA parallel processing cores also boost applications by up to 10 times compared with using a CPU alone.

“The Tesla K80 dual-GPU accelerators are up to 10 times faster than CPUs when enabling scientific breakthroughs in some of our key applications, and provide a low energy footprint,” said Wolfgang Nagel, director of the Centre for Information Services and HPC at Technische Universität Dresden in Germany.

“Our researchers use the available GPU resources on the Taurus supercomputer extensively to enable a more refined cancer therapy, understand cells by watching them live, and study asteroids as part of ESA’s Rosetta mission.”

The Tesla K80 dual-GPU accelerator starts shipping today from server manufacturers including Asus, Bull, Cirrascale, Cray, Dell, Gigabyte, HP, Inspur, Penguin, Quanta, Sugon, Supermicro and Tyan.

The Tesla K80 dual-GPU accelerator can also be tried for free on remotely hosted clusters.

Courtesy-TheInq

Should AMD And nVidia Get The Blame For Assassin’s Creed’s PC Issues?

November 19, 2014 by Michael  
Filed under Gaming

Ubisoft is claiming that its latest Assassin’s Creed game performed so badly because of certain AMD and Nvidia configurations. Last week Ubisoft was panned for releasing a game that was clearly not ready, and it originally blamed AMD for the faults. Now Ubisoft has amended the original forum post to acknowledge problems on Nvidia hardware as well.

Originally the post read “We are aware that the graphics performance of Assassin’s Creed Unity on PC may be adversely affected by certain AMD CPU and GPU configurations. This should not affect the vast majority of PC players, but rest assured that AMD and Ubisoft are continuing to work together closely to resolve the issue, and will provide more information as soon as it is available.”

However, there is no equivalent Nvidia-centric post on the main forum, and no mention of the problems affecting owners of Nvidia cards other than the GTX 970 or 980. What is amazing is that, with the problems so widespread, Ubisoft did not see them in its own testing before sending the game out to the shops. Unless it only played the game on an Nvidia GTX 970 and on consoles, it is inconceivable that it could not have seen them.

Courtesy-Fud

Amazon Goes With Intel Xeon Inside

November 18, 2014 by Michael  
Filed under Computing

Amazon has become the latest vendor to commission a customized Xeon chip from Intel to meet its exact compute requirements, in this case powering new high-performance C4 virtual machine instances on the AWS cloud computing platform.

Amazon announced at the firm’s AWS re:Invent conference in Las Vegas that the latest generation of compute-optimized Amazon Elastic Compute Cloud (EC2) virtual machine instances offer up to 36 virtual CPUs and 60GB of memory.

“These instances are designed to deliver the highest level of processor performance on EC2. If you’ve got the workload, we’ve got the instance,” said AWS chief evangelist Jeff Barr, detailing the new instances on the AWS blog.

The instances are powered by a custom version of Intel’s latest Xeon E5 v3 processor family, identified by Amazon as the Xeon E5-2666 v3. This runs at a base speed of 2.9GHz, and can achieve clock speeds as high as 3.5GHz with Turbo Boost.
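
Once the C4 instances become available, launching one should look like any other EC2 instance request, just with a c4 instance type. A hedged sketch using the boto3 Python SDK (the AMI ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # c4.8xlarge is the top of the announced range: 36 vCPUs and 60GB of
    # memory on the custom 2.9GHz Xeon E5-2666 v3. The AMI ID is a placeholder.
    response = ec2.run_instances(ImageId="ami-xxxxxxxx",
                                 InstanceType="c4.8xlarge",
                                 MinCount=1, MaxCount=1)
    print(response["Instances"][0]["InstanceId"])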

Amazon is not the first company to commission a customized processor from Intel. Earlier this year, Oracle unveiled new Sun Server X4-4 and Sun Server X4-8 systems with a custom Xeon E7 v2 processor.

The processor is capable of dynamically switching core count, clock frequency and power consumption without the need for a system level reboot, in order to deliver an elastic compute capability that adapts to the demands of the workload.

However, these are just the vendors that have gone public; Intel claims it is delivering over 35 customized versions of the Intel Xeon E5 v3 processor family to various customers.

This is an area the chipmaker seems to be keen on pursuing, especially with companies like cloud service providers that purchase a great many chips.

“We’re really excited to be working with Amazon. Amazon’s platform is the landing zone for a lot of new software development and it’s really exciting to partner with those guys on a SKU that really meets their needs,” said Dave Hill, senior systems engineer in Intel’s Datacenter Group.

Also at AWS re:Invent, Amazon announced the Amazon EC2 Container Service, adding support for Docker on its cloud platform.

Currently available as a preview, the EC2 Container Service is designed to make it easy to run and manage distributed applications on AWS using containers.

Customers will be able to start, stop and manage thousands of containers in seconds, scaling from one container to hundreds of thousands across a managed cluster of Amazon EC2 instances, the firm said.
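
In practice that maps to a two-step flow: register a task definition describing the container, then ask the cluster to run it. A rough sketch with the boto3 Python SDK (the image and cluster names are illustrative, and the preview-era API may differ from what finally ships):

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # Describe the container once...
    ecs.register_task_definition(
        family="web",
        containerDefinitions=[{
            "name": "web",
            "image": "nginx:latest",  # any Docker image
            "cpu": 256,
            "memory": 512,
            "essential": True,
        }],
    )

    # ...then run as many copies as needed across the managed EC2 cluster
    # (run_task accepts up to 10 per call; loop or use services beyond that).
    ecs.run_task(cluster="default", taskDefinition="web", count=10)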

Courtesy-TheInq

Is nVidia Winning The GPU War?

November 17, 2014 by Michael  
Filed under Computing

According to Jon Peddie Research (JPR), Nvidia has managed to claw back market share from AMD in the third quarter of 2014. JPR found that AMD’s overall unit shipments decreased 7% sequentially, while Intel and Nvidia gained 11.6% and 12.9% respectively. The ‘attach rate’ was almost flat at 155% (up 2%). A total of 32% of PCs tracked last quarter had discrete graphics, while 68% did not.

The PC market grew 6.9% sequentially, but it was down 2.6% year-on-year. Shipments of desktop graphics cards were up 7.8% from last quarter.

“Q3 2014 saw a flattening in tablet sales from the first decline in sales last quarter. The CAGR for total PC graphics from 2014 to 2017 is up to almost 3%. We expect the total shipments of graphics chips in 2017 to be 510 million units. In 2013, 454 million GPUs were shipped and the forecast for 2014 is 468 million,” JPR said.

Shipments of AMD APUs were up 10.5% over the last quarter, but AMD lost 16% in the notebook market. AMD’s discrete GPU shipments were down 19%, but notebook discrete shipments were up 10%. AMD’s overall graphics shipments were down 7%.

Intel’s desktop GPU shipments were stagnant (down 0.3%), but notebook shipments were up by 18.6%.

Nvidia’s desktop discrete shipments were up 24.3% sequentially, while notebook shipments increased 3.5% for an overall increase of 12.9%.

“Year-to-year this quarter AMD’s overall PC shipments decreased 24%, Intel increased 19%, Nvidia decreased 4%, and the others essentially are too small to measure,” the report found.

“Total discrete GPU (desktop and notebook) shipments from the last quarter increased 6.6%, and decreased 7.7% from last year. Sales of discrete GPUs fluctuate due to a variety of factors (timing, memory pricing, etc.), new product introductions, and the influence of integrated graphics. Overall, the trend for discrete GPUs has increased with a CAGR from 2014 to 2017 now of 3%.”

At the moment, an estimated 99% of all Intel chips ship with integrated graphics, compared to 66% of AMD non-server processors.

Courtesy-Fud

Will nVidia Or AMD Ever Produce A 20nm GPU?

November 14, 2014 by Michael  
Filed under Computing

It looks like we might never see 20nm GPUs from either Nvidia or AMD. From what we know, both companies spent a lot of time looking into the new 20nm manufacturing process and they have decided that it is simply not viable for GPUs.

Yields are not where they are supposed to be and from a business perspective it doesn’t make sense to design and produce chips that would end up with very low yields. At this point we do not expect to see any high-end chips in 20nm, as there are obvious manufacturing obstacles and both companies might even skip the 20nm process altogether and move directly to 16nm FinFET.

16nm FinFET GPUs coming in 2016

We expect 16nm FinFET based GPUs sometime in 2016, and this manufacturing process should bring some rather innovative products worthy of an upgrade.

One might ask why Apple doesn’t appear to have problems with its 20nm A8 and A8X chips, and we might have a partial answer for you. The Apple A8 chip has to stay under a 2.5W TDP, while the A8X used in the iPad Air 2 has a maximum TDP of 4.5W.

GPUs such as Maxwell- and Hawaii-based parts used in the Geforce GTX 980 and Radeon R9 290X have TDPs in the 150-250W range and the size of the modern GPU is an order of magnitude bigger than the size of an iPhone SoC.

Die size conundrum

The Apple A8 has a die size of 89mm2, while we can only assume that the more powerful A8X measures over 100mm2. Nvidia’s 28nm Maxwell GM204 die measures 398mm2, about four and a half times bigger in terms of sheer die size.

To put things in perspective, a single 20nm 300mm wafer can hold more than 700 A8 dies, while Nvidia can get about 140 GM204 chips from a 28nm high-k 300mm wafer. In 20nm manufacturing it would be able to get more, as each individual die would be significantly smaller.
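
Those figures line up with the standard die-per-wafer approximation, a back-of-the-envelope check that ignores edge exclusion and yield (d is the wafer diameter, S the die area):

    dies ≈ pi*(d/2)^2/S - pi*d/sqrt(2*S)

For a 300mm wafer, an 89mm2 A8 die gives roughly 794 - 71 ≈ 723 dies, while a 398mm2 GM204 die gives roughly 178 - 33 ≈ 145 dies, matching the ‘more than 700’ and ‘about 140’ figures above.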

However, these 150-250W chips are completely different from low-power SoCs with TDPs of less than 5W. They are worlds apart, and one can assume that the high performance and clocks of discrete GPUs, coupled with their sheer size, result in higher leakage and other issues. Making a chip 4.5 times bigger means there is much more room for potential leakage and yield problems.

Don’t despair, 28nm still has some life in it

Not all is lost. We all saw Nvidia pull off a small miracle with the 28nm Maxwell GM204 chip: this 5.2 billion transistor part has a TDP of just 165W.

Its predecessor, the Geforce GTX 780 based on the GK110 chip, ended up with a 250W TDP, 7.08 billion transistors and a massive 561mm2 die size. Maxwell is also faster than Kepler, at least in this iteration, yet both are 28nm products.

We expect AMD’s upcoming Fiji GPU to be substantially more efficient than the Hawaii XT chip used in last year’s Radeon R9 290X. However, the new part is coming in 2015.

Courtesy-Fud

Is TSMC’s 16nm FinFET At Risk?

November 13, 2014 by Michael  
Filed under Computing

TSMC’s next generation 16nm process has reached an important milestone – 16nm FinFET Plus (16FF+) is now in risk production.

Needless to say, 16FF+ comes a few quarters after the 16nm rollout, expected in Q1 2015. TSMC hopes to start churning out 50,000 16FF wafers in Q2 2015. As for the Plus process, it is still more than a year away in terms of availability and it will be followed by 10nm, which is expected to materialise in late 2016.

TSMC says the improved 16FF+ process can deliver a 40% performance boost compared to its planar 20nm SoC process (20SoC), with a 50% reduction in power consumption.

“Our successful ramp-up in 20SoC has blazed a trail for 16FF and 16FF+, allowing us to rapidly offer a highly competitive technology to achieve maximum value for customers’ products,” said Mark Liu, president and Co-CEO for TSMC.

“We believe this new process can provide our customers the right balance between performance and cost so they can best meet their design requirements and time-to-market goals.”

The first 16FF+ chips are expected to tape out in late 2015, and TSMC expects the volume ramp to start in mid-2015.

Courtesy-Fud

Is AMD’s Carrizo APU Coming Soon?

November 5, 2014 by Michael  
Filed under Computing

AMD’s upcoming Carrizo APU has appeared in the SiSoftware Sandra and GFXBench database.

The Carrizo processor, which appeared as ‘AMD Eng Sample: 2M1801C1Y4381_26/18/08/04_9874’ (2M 4T, 2.6GHz, 1.4GHz IMC, 2x 1MB L2), is clocked at up to 2.6GHz. The new Excavator-based CPU is backed by 512-core graphics, and the GPU ends up almost twice as fast as previous Radeon R5 on-die solutions.

The results were unearthed by Dutch tech site Hardware.info, which pointed out that the Carrizo sample performs on a par with the A10-7300 Kaveri, despite the fact that the 7300 is clocked at 3.2GHz. This points to a significant IPC gain, as the Carrizo sample ran at 2.6GHz.

Graphics tests yielded mixed results, which may be indicative of driver issues. However, this is to be expected when it comes to upcoming parts, as the launch is still months away, giving AMD ample time to tweak the drivers. For example, the GPU core runs at 626MHz, but in different tests it was listed as a 64-bit and 128-bit part.

Courtesy-Fud

RedHat Releases Update For CI5 For OpenStack

November 5, 2014 by Michael  
Filed under Computing

Red Hat has released an updated version of its Cloud Infrastructure suite that combines several products to deliver a comprehensive OpenStack-based cloud platform, adding its Satellite 6 lifecycle management tool to the mix.

Launched at the OpenStack Summit in Paris on Monday, Red Hat Cloud Infrastructure 5 brings together the firm’s Red Hat Enterprise Linux (RHEL) OpenStack Platform, CloudForms for managing hybrid cloud deployments, Red Hat Enterprise Virtualisation, and now the Satellite 6 lifecycle management tool which was released in September.

The new release is a comprehensive solution available under a single subscription licence that provides organisations with the tools they need to transform their IT infrastructure from traditional data centre virtualisation to an OpenStack-powered cloud capable of linking with public cloud OpenStack resources, Red Hat said.

“Hybrid environments are simply the reality of today’s IT, and organisations want to get to the cloud on their own terms and timeline. Red Hat Cloud Infrastructure acknowledges that reality,” said Joe Fitzgerald, vice president and general manager for cloud management at Red Hat.

“By bringing software lifecycle and configuration management capabilities that span physical, virtual and cloud systems to users via the addition of Red Hat Satellite, we’re helping to establish Red Hat Cloud Infrastructure as one of the most comprehensive and premier cloud infrastructure solutions for enterprises.”

Satellite 6 enables provisioning and lifecycle management tasks for various Red Hat products, including RHEL, while CloudForms provides cloud management and orchestration capabilities such as self-service portals, chargeback and metering of services across private and public clouds.

Meanwhile, the RHEL OpenStack Platform 5 is itself based on the previous Icehouse release of OpenStack combined with the firm’s RHEL 7 operating system. Red Hat already offers a three-year software support product lifecycle for this platform.

Courtesy-TheInq

Is nVidia’s Tegra K1 Being Shunned?

October 29, 2014 by Michael  
Filed under Mobile

Nvidia’s Tegra K1 is being shunned by major phone designers as if it were suffering from ebola, our industry sources have confirmed.

It looks like 2014 is the year of Qualcomm, and every significant design win has a Qualcomm processor inside.

Mediatek is trying with entry-level phones, but it still has to prove itself in the mainstream and high-end phones that the European and US markets crave. It could get there in time, but it didn’t manage it in 2014.

The 32-bit quad-core Tegra K1 managed a few design wins, but none of them were in phones. Nvidia is using the chip for its own Jetson TK1 development board, which gathered some nice revenues. There was also the Shield tablet, which was not eaten by Hydra, the Acer Chromebook 13, the HP Chromebook 14, the Lenovo ThinkVision 28 and the XiaoMi MiPad.

The XiaoMi tablet seems to be selling like hotcakes, although, since most of the sales are in China, the phrase should probably be ‘steamed pork buns’. The XiaoMi tablet almost matches the Nexus 9’s specification, if you look at it in the right light, but sells for half the price. The 64-bit Tegra K1, aka Denver, scored a design win with the HTC Nexus 9, and this looks like it will sell in buckets. Nvidia also has the Google Project Tango tablet, but this won’t sell in any serious numbers as it is more of a developer’s toy than a retail product.

However, by the end of October 2014 there was not a single phone design win for the Tegra K1, 32-bit or 64-bit. Nvidia’s Tegra 4i ‘Gray’ chip was greeted with a loud yawn when it showed up in the Wiko Wax, the Blackphone and the LG G2 mini LTE for the South American market. None of them was a huge seller for Nvidia.

The 64-bit Tegra K1 might get some attention, but it looks like phones based on the Denver chip might not show up until early 2015 at the earliest. Meanwhile, the Snapdragon 810, Qualcomm’s 64-bit high-end chip, will appear in Mobile World Congress phones by that time. People are already claiming that the Snapdragon 810 is inside the Samsung Galaxy S6, and we would be surprised if it was not in the LG G3 successor (LG G4) or the HTC One M8 successor, which will probably be dubbed the HTC One M9.

This doesn’t leave Nvidia much space for success in phones, but then again Tegra is selling in cars, developer boards (such as the Jetson dev kit), Chromebooks and the occasional tablet.

No-one can win in all markets, and it seems that Tegra-powered Chromebooks perform quite well and that Nvidia is the top choice for most car manufacturers. However, the phone market might be too hot for the 32-bit Tegra K1 to handle. We will see if Denver, the 64-bit Tegra K1, or its successor can change things in 2015.

Courtesy-Fud

Does Samsung Fear A Processor War?

October 15, 2014 by Michael  
Filed under Computing

Samsung Electronics CEO Kwon Oh-hyun has said he is not worried about a price war in the semiconductor industry next year, even though the firm is rapidly expanding its production volume.

“We’ll have to wait and see how things will go next year, but there definitely will not be any game of chicken,” said Kwon, according to Reuters, suggesting the firm will not take chip rivals head on.

Samsung has reported strong profits for 2014 owing to better-than-expected demand for PC and server chips. Analysts have also forecast similar results for the coming year, so things are definitely looking good for the company.

It emerged last week that Samsung will fork out almost $15bn on a new chip facility in South Korea, representing the firm’s biggest investment in a single plant.

Samsung hopes the investment will bolster profits in its already well-established and successful semiconductor business, and help to maintain its lead in memory chips and grow beyond the declining sales of its smartphones.

According to sources, Samsung expects its chip production capacity to increase by a “low double-digit percentage” after the facility begins production, which almost goes against the CEO’s claims that it is not looking for a price war.

Last month, Samsung was found guilty of involvement in a price-fixing racket with other chip makers stretching back over a decade, for which European regulators levied fines totalling €138m.

An antitrust investigation into chips used in mobile device SIM cards found that Infineon, Philips and Samsung colluded to artificially manipulate the price of SIM card chips.

Courtesy-TheInq

HP Will Offer ARM 64-bit Processors In Moonshot

October 1, 2014 by Michael  
Filed under Computing

HP has announced ARM-based processor cartridges for its Moonshot servers, including 64-bit modules for high-performance web caching and modules with integrated digital signal processing (DSP) for specialised tasks such as transcoding and telephony applications.

Available immediately, the new server cartridges represent the fourth “leap”, or release of HP’s Moonshot hardware, which is designed to target very specific applications calling for high-density server deployments rather than the general purpose applications met by HP’s existing Proliant line.

The new modules include the m400, which is a 64-bit cartridge based on the Applied Micro X-Gene server on a chip with eight cores running at up to 2.4GHz, and the m800, based on the 32-bit Keystone 66AK2Hx system on a chip (SoC) from Texas Instruments.

Of the two, the m800 was announced at the end of last year along with the cartridges based on Intel’s Avoton Atom and AMD’s Opteron X2150, but is only now shipping.

As with the existing cartridges, the new hardware is designed for the Moonshot 1500 rack-mount enclosure, which can house up to 45 hot-pluggable cartridge modules.

Reflecting their focus on specific applications, both of the new cartridge options will come with a suitable software package, according to Iain Stephen, vice president and general manager for HP Servers in EMEA.

The m400 will thus ship with Ubuntu Linux, which includes the Juju service orchestration tool and Canonical’s Metal-as-a-Service (MaaS) tool for automatically provisioning bare metal servers.

“If you move to a software defined server world, there isn’t a lot of variation in the deployment, so the fastest way to get customers up and running is to have pre-loaded software,” he told The INQUIRER.

The m800 also comes with Canonical’s Ubuntu Linux operating system. This cartridge is a little more exotic, comprising four separate servers, each based on a TI chip with four Cortex-A15 ARM cores and up to eight TMS320C66x high-performance DSPs apiece.

However, it also ships with software for transcoding and voice recognition processing that makes use of the DSP hardware, according to Stephen.

“So it’s a very packaged piece of technology to run a very specific task for the customer,” he said.

HP’s Moonshot platform is aimed at emerging workloads, many of which are identified by customers and partners working with HP in its Discovery Labs, the firm said.

The most popular niche so far has proven to be running hosted desktops, according to Stephen, typically using the m700 cartridge which integrates four separate servers, each based on a quad-core AMD Opteron X2150 SoC.

“This is a completely new way of doing computing, with a chassis with a number of processors in it for specific tasks, and as a customer you’ve got to have a very good understanding of your software stack to take full advantage,” he said.

The technology is still at the “discovery” phase, he added, but HP expects to see growth in 2015 because there is now a broader range of cartridges targeting different applications.

Courtesy-TheInq

RedHat Ups Game With Fedora 21

September 29, 2014 by Michael  
Filed under Computing

Red Hat has announced the Fedora 21 Alpha release for Fedora developers and any brave users who want to help test it.

Fedora is the leading edge – some might say bleeding edge – distribution of Linux that is sponsored by Red Hat. That’s where Red Hat and other developers do new development work that eventually appears in Red Hat Enterprise Linux (RHEL) and other Red Hat based Linux distributions, including Centos, Scientific Linux and Mageia, among others. Therefore, what Fedora does might also appear elsewhere eventually.

The Fedora project said the release of Fedora 21 Alpha is meant for testing in order to help it identify and resolve bugs, adding, “Fedora prides itself on bringing cutting-edge technologies to users of open source software around the world, and this release continues that tradition.”

Specifically, Fedora 21 will produce three software products, all built on the same Fedora 21 base, and these will each be a subset of the entire release.

Fedora 21 Cloud will include images for use in private cloud environments like OpenStack, as well as AMIs for use on Amazon, and a new image streamlined for running Docker containers called Fedora Atomic Host.

Fedora 21 Server will offer data centre users “a common base platform that is meant to run featured application stacks” for use as a web server, file server, database server, or as a base for offering infrastructure as a service, including advanced server management features.

Fedora 21 Workstation will be “a reliable, user-friendly, and powerful operating system for laptops and PC hardware” for use by developers and other desktop users, and will feature the latest Gnome 3.14 desktop environment.

Those interested in testing the Fedora 21 Alpha release can visit the Fedora project website.

Courtesy-TheInq

nVidia Finally Goes 20nm

September 23, 2014 by Michael  
Filed under Computing

For much of the year we were under the impression that the second-generation Maxwell would end up as a 20nm chip.

First-generation Maxwell ended up branded as the Geforce GTX 750 and GTX 750 Ti, and the second-generation Maxwell launched a few days ago as the Geforce GTX 980 and GTX 970, with both cards based on the 28nm GM204 GPU.

This is actually quite good news as it turns out that Nvidia managed to optimize power and performance of the chip and make it one of the most efficient chips manufactured in 28nm.

Nvidia 20nm chips coming in 2015

Still, people keep asking about the transition to 20nm, and it turns out that the first 20nm chip from Nvidia will be a mobile SoC.

The first Nvidia 20nm chip will be a mobile part, most likely Erista, the successor to the Tegra K1.

Our sources didn’t mention the exact codename, but it turns out that Nvidia wants to launch a mobile chip first and then it plans to expand into 20nm with graphics.

Unfortunately we don’t have any specifics to report.

AMD 20nm SoC in 2015

AMD is doing the same thing: its first 20nm chip, codenamed Nolan, is an entry-level APU targeting the tablet and detachable markets.

There is a strong possibility that Apple and Qualcomm simply bought a lot of 20nm capacity for their mobile and modem chips, and what was left was simply too expensive to make economic sense for big GPUs.

20nm will drive voltages down while allowing higher clocks and more transistors per square millimetre, and it will enable better chips overall.

Just remember that Nvidia’s Tegra 3, the world’s first quad-core mobile chip, was rather hot in 40nm, while making a quad-core in 28nm enabled higher performance and significantly better battery life. The same was true of other mobile chips of the era.

We expect a similar leap from going down to 20nm in 2015, and Erista might be the first chip to make it. A Maxwell-derived architecture in 20nm would deliver even more efficiency. Needless to say, AMD plans to launch 20nm GPUs next year as well.

It looks like Nvidia’s 16nm FinFET Parker processor, based on the Denver CPU architecture and Maxwell graphics, won’t appear before 2016.

Courtesy-Fud

Will Intel Debut NUC This Year?

September 17, 2014 by Michael  
Filed under Computing

Not everyone is happy with Intel’s Next Unit of Computing (NUC) brand, which the company came up with for its small form factor desktop replacements at IDF 2012. Intel started shipping these small desktops in early 2013.

NUC started off with Sandy Bridge-based parts codenamed Ski Lake (DCP847SK), and with the Celeron 847 it got quite a lot of attention thanks to more affordable pricing. A year later Intel launched multiple Core i3-based SKUs with Ivy Bridge, and this year it introduced models based on the Wilson Canyon platform and Haswell CPUs. Affordable Bay Trail models appeared as well.

The latest Intel NUC Kit D54250WYK measures a tiny 116.6mm x 112mm x 34.5mm and sells for about $370 in the US, €300 in Germany or £278 in the UK. Back at IDF 2014, Intel’s biggest developer conference, people close to the NUC project told us that the project has been a success since launch.

It started with 250,000 shipped units in the first generation and grew to half a million units with second-generation products. Intel’s ultimate goal is to sell as many as one million units this year, although shipments in the 750,000 to one million range might be more realistic. Even if Intel sells around 750,000 units, it will mean it managed to triple the market within a rather short time.

There will be Braswell- and Broadwell-based fourth-generation NUCs coming in 2015, but Intel first needs to launch 15W TDP Broadwell parts, and this happens in Q2 2015 as far as we know. We don’t know whether the Braswell NUC will come as soon as Broadwell-U or a bit later, but it is in the works.

This Braswell NUC should be really affordable and should replace the Bay Trail-M based DN2820FYKH powered by the Celeron N2820. Bear in mind that this entry-level Celeron kit costs a mere $144 at press time and only needs some RAM and an HDD to work. At the lowest spec, a 2GB SODIMM sells for as low as $10 and Toshiba has a 62GB mSATA drive for as low as $24.95.

This means a small, power-efficient machine that can run Windows comes in as low as $179 ($144 + $10 + $24.95, rounded). No wonder they are so popular.

Courtesy-Fud