AMD Develops The Excavator Processor Specifically For Gamers

February 5, 2016 by Michael  
Filed under Computing

AMD has unveiled a handful of new processors as part of its 2016 desktop refresh, including the first chip based on the Excavator core to target desktop PCs. The firm will also release new motherboards with high-speed USB 3.1 ports and connectors to support M.2 Sata SSDs.

AMD’s new desktop processors are available now, and aimed chiefly at the enthusiast and gamer markets. They comprise three chips fitting into the firm’s FM2+ processor socket infrastructure for mainstream systems.

Two of these chips are based on the Godavari architecture and are APUs featuring Steamroller CPU cores and Graphics Core Next GPU cores. The A10-7860K has four CPU cores and eight GPU cores with a clock speed of 3.6GHz, while the A6-7470K has dual CPU cores and four GPU cores at a clock speed of 3.7GHz. Both have a maximum Turbo speed of 4GHz.

The A10-7860K is not AMD’s top-end chip, coming in below the A10-7870K and the A10-7890K, but it does replace three existing chips in the A10 line-up, the A10-7850K, A10-7700K and A10-7800.

“The interesting thing about the A10-7860K is that it delivers the same high 4GHz Turbo speed, but it is a 65W part, so it delivers comparable performance to the A10-7850K, but we’re dropping 30W,” said AMD client product manager Don Woligroski.

 

The third chip is badged under AMD’s Athlon brand, as it has CPU cores only and does not qualify as an APU. The Athlon X4 845 features four of the new Excavator cores used in the mobile Carrizo platform, clocked at 3.5GHz with a Turbo speed of up to 3.8GHz.

The Athlon X4 845 is not at the top of the Athlon stack either, but is “more of an efficient, really great low-cost part”, according to Woligroski.

AMD will also deliver new motherboards to complement the latest processors sometime during the first quarter of 2016. These bring support for USB 3.1 Gen2 ports with the new Type-C connector, offering 10Gbps data rates, plus connectors for M.2 SATA SSD modules. M.2 modules are more usually seen in laptop and mobile systems because of their compact size.

Future AMD desktop chips will converge on a common socket infrastructure known as AM4, according to Woligroski. The first processors to use this are likely to be the upcoming Summit Ridge desktop chip and Bristol Ridge APU.

AMD also announced a new heatsink and fan combination for cooling the chips. The AMD Wraith Cooler is claimed to deliver 34 percent more airflow while generating less than a tenth of the noise of its predecessor, at 39dBA.

Courtesy-TheInq

 

Are Light Powered Transistors On The Horizon?

February 5, 2016 by Michael  
Filed under Computing

A team of researchers have emerged from their smoke filled labs claiming to have invented a transistor which runs on light rather than applied voltage.

According to Technology Review, researchers at the University of North Carolina at Charlotte say the new transistor controls the flow of electrons through it so that it switches on when light falls on it and turns itself off when it gets dark.

This means the devices can be made smaller than field-effect transistors, because they do not require doping in the same way and can be squeezed into tighter spaces, while also switching faster.

Apparently the idea is not rocket science; it builds on the long-known fact that some materials are photoconductive.

What the team has done is create a device that uses a ribbon of cadmium and selenium a couple of atoms thick. It conducts more than a million times more current when on than when off, which is about the same on/off ratio as regular transistors.

Of course it is years away from becoming a product. The team still has not worked out how to deliver light to each transistor, or whether doing so will cost more power.

Courtesy-Fud

 

Will MediaTek’s Helio Debut This Year?

February 4, 2016 by Michael  
Filed under Computing

MediaTek Senior Vice President and Chief Financial Officer David Ku has confirmed that the company plans to ship the Helio X30 in 2016.

The X30 has been an ephemeral product for quite some time, although it had long been expected to follow the X20 eventually. Ku expects phones based on the Helio P10 and X20 to start arriving this quarter.

The majority of design wins for performance and mainstream phones in the first half of 2016 will be for last year’s flagship Helio X10, the upcoming Helio P10 and the soon-to-be new flagship Helio X20.

Ku mentioned during the company’s fourth-quarter 2015 financial results that MediaTek will launch the Helio X30 in the second part of the year, the same time of year it launched the X20.

The X30 will be released in 2016 but the phones will only show up in 2017.

A normal phone design cycle usually lasts 12 to 18 months. The Helio P20, the company’s first 16nm SoC, is expected in the second half of 2016. With some luck, we might see some devices shipping with this new SoC before the end of the year.

MediaTek didn’t give any additional information about the Helio X30, other than to acknowledge its existence. Let’s first see how the Helio X20 and P10 do this year.

Courtesy-Fud

 

AMD Goes Virtual With GPUs

February 3, 2016 by Michael  
Filed under Computing

AMD has revealed what it claims are the world’s first hardware virtualized GPU products — AMD FirePro S-Series GPUs with Multiuser GPU (MxGPU) technology.

The big idea is to have a product for remote workstation, cloud gaming, cloud computing, and Virtual Desktop Infrastructure (VDI).

In the virtualization ecosystem, key components like the CPU, network controller and storage devices are being virtualized in hardware to deliver optimal user experiences. So far the GPU has been off the list.
AMD MxGPU technology, for the first time, brings the modern virtualization industry standard to the GPU hardware.

AMD MxGPU technology is based on SR-IOV (Single Root I/O Virtualization), a PCI Express standard, and brings hardware GPU scheduling logic to the user.
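For readers who want to see what that standard looks like from the host side, here is a minimal sketch of the generic Linux sysfs SR-IOV interface (sriov_totalvfs / sriov_numvfs) that any SR-IOV device exposes. The PCI address and VF count below are hypothetical, and in practice AMD’s host driver and the hypervisor tooling would normally manage this step.

```python
# Minimal sketch of the generic Linux sysfs SR-IOV interface.
# The PCI address below is hypothetical; a real system would take the
# GPU's address from `lspci`. Enabling VFs requires root privileges.
from pathlib import Path

PCI_DEVICES = Path("/sys/bus/pci/devices")

def sriov_info(pci_addr: str) -> dict:
    """Report how many virtual functions the device supports and has enabled."""
    dev = PCI_DEVICES / pci_addr
    return {
        "total_vfs": int((dev / "sriov_totalvfs").read_text().strip()),
        "enabled_vfs": int((dev / "sriov_numvfs").read_text().strip()),
    }

def enable_vfs(pci_addr: str, count: int) -> None:
    """Ask the driver to create `count` virtual functions, one per VM."""
    (PCI_DEVICES / pci_addr / "sriov_numvfs").write_text(str(count))

if __name__ == "__main__":
    addr = "0000:03:00.0"          # hypothetical GPU address
    print(sriov_info(addr))
    # enable_vfs(addr, 16)         # e.g. one VF per virtual workstation user
```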

The outfit claims that it preserves the data integrity of virtual machines (VMs) and their application data through hardware-enforced memory isolation logic that prevents one VM from accessing another VM’s data.

It also exposes all graphics functionality of the GPU to applications, allowing full virtualization support not only for graphics APIs like DirectX and OpenGL but also for GPU compute APIs like OpenCL.

The new AMD FirePro S7150 and AMD FirePro S7150 x2 server graphics cards will combine with OEM offerings to create high-performance virtual workstations and address IT needs of simple installation and operation, critical data security and outstanding performance-per-dollar.

Typical VDI use cases include Computer-Aided Design (CAD), Media and Entertainment, and office applications powered by the industry’s first hardware-based virtualized GPU.

Sean Burke, corporate vice president and general manager, Radeon Technologies Group, AMD said that the AMD hardware virtualization GPU product line is another example of its commitment to offering customers exceptional cutting edge graphics in conjunction with fundamental API software support.

“We created the innovative AMD FirePro S-series GPUs to deliver a precise, secure, high performance and enriched graphics user experience — all provided without per user licensing fees required to use AMD’s virtualized solution.”

Jon Peddie, president of Jon Peddie Research, said: “The move to virtualization of high-performance graphics capabilities typically associated with standalone workstations only makes sense, and will likely gain significant traction in the coming years.”

Pat Lee, senior director of Remote Experience for Desktop and Application Products at VMware, said that the AMD FirePro S7150 and S7150 x2 GPUs complement VMware Horizon by giving more users a richer, more compelling user experience. “Systems equipped with AMD FirePro cards can provide VMware Horizon users with enhanced video and graphics performance, benefiting especially those installations that focus on CAD and other 3D intensive applications.”

A single AMD FirePro S7150 card, which features 8GB of GDDR5 memory, can support up to 16 simultaneous users, while a single AMD FirePro S7150 x2 card, which includes a total of 16GB of GDDR5 memory (8GB per GPU), can support up to twice as many (32 in total). Both models feature a 256-bit memory interface.
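A quick back-of-the-envelope check on those user counts (a sketch based only on the figures above, not vendor sizing guidance): splitting each card’s memory evenly across its maximum user count gives the dedicated framebuffer per virtual workstation.

```python
# Framebuffer per user if the card's GDDR5 is split evenly across the
# maximum stated number of simultaneous users.
configs = {
    "FirePro S7150 (8 GB, 16 users)": (8, 16),
    "FirePro S7150 x2 (16 GB, 32 users)": (16, 32),
}
for name, (gb, users) in configs.items():
    print(f"{name}: {gb * 1024 // users} MB per user")
# Both work out to 512 MB of video memory per user at full load.
```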

Based on AMD’s Graphics Core Next (GCN) architecture to optimize utilization and maximize performance, the AMD FirePro S7150 and S7150 x2 server GPUs feature:

• AMD Multiuser GPU (MxGPU) technology to enable consistent, predictable and secure performance from virtualized workstations, giving users workstation-class experiences backed by full ISV certifications with the world’s first hardware-based virtualized GPU products.

• GDDR5 GPU Memory to help accelerate applications and process computationally complex workflows with ease.

• Error Correcting Code (ECC) memory to ensure the accuracy of computations by correcting single-bit errors and detecting double-bit errors caused by naturally occurring background radiation.

• OpenCL 2.0 support to help professionals tap into the parallel computing power of modern GPUs and multicore CPUs, accelerating compute-intensive tasks in leading CAD/CAM/CAE and Media & Entertainment applications that support OpenCL (see the sketch after this list).

• AMD PowerTune, an intelligent power management system that monitors both GPU activity and power draw, cutting power when workloads do not demand full activity and delivering the optimal clock speed for the highest possible performance within the GPU’s power budget under high-intensity workloads.
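Because each virtual function appears to its guest as an ordinary GPU, a VM can enumerate it through a standard OpenCL query. Below is a minimal sketch using the pyopencl package (assuming pyopencl and an OpenCL runtime are installed inside the guest); the virtualized FirePro would simply show up as one of the listed devices.

```python
# List every OpenCL platform and device visible to this (virtual) machine.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.version})")
    for dev in platform.get_devices():
        print(f"  Device        : {dev.name}")
        print(f"  Compute units : {dev.max_compute_units}")
        print(f"  Global memory : {dev.global_mem_size / 2**30:.1f} GiB")
        print(f"  OpenCL version: {dev.version}")
```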

AMD FirePro S7150 and S7150 x2 server GPUs are expected to be available from server technology providers in the first half of 2016.

The AMD FirePro S-Series GPUs with MxGPU technology are being exhibited in a Dell server system at SolidWorks World 2016 in Dallas, Texas at the moment.

Courtesy-Fud

 

Is nVidia’s Pascal Finally Coming In April?

February 3, 2016 by Michael  
Filed under Computing

The dark satanic rumour mill has been flat out manufacturing hell on earth yarns that Nvidia is about to release a new Pascal GPU soon.

The logic is that Nvidia has the time to counter AMD’s Polaris by pushing out a Pascal GPU sooner than anyone expected.

Kotaku claims that NVIDIA looks set to beat AMD’s Polaris architecture when the new GPU appears. In fact it hinted that AMD brought down the price of the Radeon R9 Nano to $499 to counter this move in the high end of the market.

The latest rumor is that Nvidia will be churning out the Pascal architecture in all its GPUs from April. When the new GPUs arrive they will be marketed as “TITAN-grade”, which suggests they will replace the current offerings sold under the “TITAN” brand. The main GP100 chip, meanwhile, is said to come with 32GB of VRAM.

These rumors about Pascal-based GPUs are currently based on shipping manifests spotted in the Zauba database in India, which tracks products imported into or exported from the country.

It is thought that Nvidia’s CEO Jen-Hsun Huang will unveil the Pascal GPU in April during the GPU Technology Conference, most likely during his keynote on April 4, the conference’s first day.

Courtesy-Fud

 

MediaTek Goes LTE CAT 6 On Low End SoCs

January 29, 2016 by Michael  
Filed under Computing

MediaTek appears to be ready to give three more entry-level processors LTE Cat 6 modems so they can manage 300Mbit/s downloads and 50Mbit/s uploads. We already knew that the high-end deca-core X20 and mainstream eight-core P10 were getting LTE Cat 6.

According to the Gizchina website, the three new SoCs carry the catchy titles of MT6739, MT6750 and MT6750T.

The MT6739 will probably replace the MT6735. Both have quad A53 cores, but the MT6739 gets a Cat 6 upgrade from Cat 4. The MT6739 supports clock speeds of up to 1.5GHz, 512KB of L2 cache, 1280×720 displays at 60fps, 1080p 30fps H.264 video decode and a 13-megapixel camera. This makes it an entry-level SoC for phones in the $100 price range.
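As a rough illustration of what the Cat 4 to Cat 6 jump buys, here is a small sketch using the standard peak rates (150Mbit/s down for Cat 4, 300Mbit/s down for Cat 6) and ignoring real-world overhead:

```python
# Best-case time to download a 1 GB app at LTE Cat 4 versus Cat 6 peak rates.
def download_seconds(file_mb: float, mbit_per_s: float) -> float:
    return file_mb * 8 / mbit_per_s

for cat, rate in [("Cat 4", 150), ("Cat 6", 300)]:
    print(f"{cat} @ {rate} Mbit/s: 1 GB in ~{download_seconds(1024, rate):.0f} s")
# Peak throughput doubles, so the best-case download time halves.
```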

The MT6750 and MT6750T look like twins; only the T version supports full HD 1920×1080 displays. The MT6750 has eight cores, four A53 clocked at 1.5GHz and four A53 clocked at 1.0GHz, and is manufactured on TSMC’s new 28nm High Performance Mobile Computing process. This is the same manufacturing process MediaTek is using for the Helio P10 SoC, and it allows lower leakage and better overall transistor performance at lower voltage.

The MT6750 SoC supports single-channel LPDDR3 at 666MHz and eMCP up to 4GB. It also supports eMMC 5.1, a 16-megapixel camera, and 1080p 30fps decoding of both H.264 and H.265. It comes with an upgraded ARM Mali T860 MP2 GPU running at 350MHz and supports 1280×720 (HD720) displays at 60fps. The biggest upgrade, then, is the move to Cat 6, and it makes sense: most European and American networks are now demanding a Cat 6 or higher modem that supports carrier aggregation.

This new SoC looks like a slowed-down version of the Helio P10 and should be popular for entry-level Android phones.


Courtesy-Fud

AMD’s GPU Goes Open

January 29, 2016 by Michael  
Filed under Computing

AMD’s top gaming guy Nicolas Thibieroz is starting to open up GPU technology to make sure the technology evolves.

He said that GPUOpen is the beginning of a new philosophy at AMD and continues the initiative started with Mantle; now is the time to do even more for developers. Apparently the creation of the Radeon Technologies Group, led by Raja Koduri, was key in getting GPUOpen off the ground.

Thibieroz said that innovative results were only possible via the exchange of knowledge that happens within the game development community. While whole conferences are dedicated to this information sharing, it is often in more modest settings that inspiration takes form. Dinner conversations, plan files, developer forums or chats are common catalysts to graphics greatness.

However there are hurdles getting in the way of productivity and innovation. Developers can’t use their R&D investment on both consoles and PC because of the disparity between the two platforms.

Console games use low-level GPU features that may not be exposed on PC at the same level of functionality. This causes less efficient code paths to be implemented on PC instead.
Proprietary libraries or toolchains with “black box” APIs prevent developers from accessing the code for maintenance, porting or optimisation purposes.

“Game development on PC needs to scale to multiple quality levels, including vastly different screen resolutions. Triple monitor setups, 4K support or dual renders for VR rendering require vast amounts of GPU processing power yet brute force rendering only gets you so far. There is still a vast amount of graphics performance still untapped, and it’s time to explore smarter ways to intelligently render those increasing numbers of pixels. “Opening up” the GPU is how we solve this,” Thibieroz wrote.

GPUOpen is composed of two areas: Games & CGI for game graphics and content creation, and Professional Compute for high-performance GPU computing in professional applications.
GPUOpen will provide code and documentation allowing PC developers to exert more control on the GPU.

Current and upcoming GCN architectures, such as Polaris, include many features not exposed today in PC graphics APIs, and GPUOpen aims to empower developers with ways to use some of those features.

In addition to generating quality or performance advantages, such access will also enable easier porting from current-generation consoles to the PC platform.
GPUOpen will also make a commitment to open source software.

“The game and graphics development community is an active hub of enthusiastic individuals who believe in the value of sharing knowledge. Full and flexible access to the source of tools, libraries and effects is a key pillar of the GPUOpen philosophy. Only through open source access are developers able to modify, optimize, fix, port and learn from software,” he said.

This should encourage innovation and the development of amazing graphics techniques and optimisations in PC games.

AMD will start a collaborative engagement with the developer community. GPUOpen software will be hosted on public source code repositories such as GitHub as a way to enable sharing and collaboration. Engineers from different functions will also regularly write blog posts about various GPU-related topics, game technologies or industry news.

Courtesy-Fud

 

Qualcomm Goes 4.5 LTE Pro

January 27, 2016 by Michael  
Filed under Computing

Recently, Qualcomm has published a new corporate presentation detailing its path from 3GPP’s “Release 13” (March 2016) and beyond for LTE networks– also more conveniently known as 4.5G LTE “Advanced Pro” – with a development timeframe between 2016 and 2020.

This will be an “intermediate” standard before the wireless industry continues with “Release 15” in 2020 and beyond, also known as 5G technology. The company intends to make LTE Advanced Pro an opportunity to use up more spectrum before 5G networks launch next decade and wants it to support further backwards-compatibility with existing LTE deployments, extremely low latencies, and unlicensed spectrum access, among many other new features.

In its new 4.5G presentation, Qualcomm has highlighted ten major bullet points that it expects to be present in its next-generation LTE “Advanced Pro” specification. The first point describes delivering fiber-like speeds by using Carrier Aggregation (introduced with “LTE Advanced” networks in 2013) to aggregate both licensed and unlicensed spectrum across more carriers, and to use simultaneous connections to different cell types for higher spectral efficiency (for example: using smaller, single-user pCells combined with large, traditional cell towers).

Qualcomm’s second bullet point is to make native use of Carrier Aggregation with LTE Advanced Pro by supporting up to 32 carriers at once across a much fatter bandwidth pipe. This will primarily be achieved using a new development called “Licensed Assisted Access.”

In short, Licensed Assisted Access (LAA) is the 3GPP’s effort to standardize LTE use inside 5GHz WiFi spectrum. It was introduced in 2015 and allows mobile users to use both licensed and unlicensed spectrum bands at the same time. This makes sense from an economic scarcity standpoint, as a fairly large number of channels are available for use in unlicensed bands (more than 500MHz in many regions). This should ultimately allow carriers with “low interference” in unlicensed spectrum to aggregate with licensed-band carriers to make the most efficient use of all locally available spectrum.

Qualcomm says that network traffic can be distributed across both licensed and unlicensed carriers when unlicensed bands are being lightly used. The result is that Licensed Assisted Access (LAA) users win by getting higher throughput and lower latency. In 3GPP Release 14 and beyond, Qualcomm eventually anticipates improving upon LAA with “Enhanced License Assisted Access” (eLAA). This second-generation design will include features such as uplink / downlink aggregation, dual unlicensed-and-licensed connectivity across small cells and large traditional cells, and a further signal complexity reduction for more efficient channel coding and higher data rates.

The company’s third bullet point for LTE Advanced Pro is to achieve “significantly lower latency” – up to ten times lower, to be precise – yet still be able to operate on the same bands as current LTE towers. It expects to achieve this primarily through a new Frequency Division Duplexing (FDD) / Time Division Duplexing (TDD) design with significantly lower round-trip times (RTTs) and transmission time intervals (TTIs). We are looking at around 70 microseconds to transmit 14 OFDM data symbols versus the current LTE / LTE-A timeframe of 1 millisecond for the same amount of data. The company also expects to achieve significantly lower latency against current TCP/UDP throughput limitations (from current LTE-A peak rates), in VoIP applications, and for next-gen automotive LTE connection needs.
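A quick sanity check on those transmission-interval figures (a sketch using only the numbers quoted above):

```python
# Compare the legacy 1 ms LTE subframe with the ~70 microsecond interval
# quoted for LTE Advanced Pro, both carrying 14 OFDM data symbols.
legacy_tti_us = 1000   # current LTE / LTE-A: 1 ms per 14-symbol subframe
new_tti_us = 70        # figure quoted for LTE Advanced Pro
print(f"Air-interface transmission time shrinks by ~{legacy_tti_us / new_tti_us:.0f}x")
# Roughly a 14x reduction at the physical layer, which leaves headroom for
# the "up to ten times lower" end-to-end latency claim once scheduling and
# retransmission overheads are included.
```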

The fourth bullet point, and a very important one, is to increase traffic flexibility by converting uplink resources for offloading downlink traffic. In much more technical terms, Qualcomm will use a new “Flexible Duplex” design that has self-contained TDD subframes and a dynamic uplink / downlink pattern that adapts to real-time network traffic instead of a static split. We can expect to see this implemented around 3GPP Release 14.

Qualcomm’s fifth bullet point for 4.5G LTE Advanced Pro is to enable many more antennas at the base station level to significantly increase capacity and coverage. In 3GPP Release 13 this will be called “Full Dimension MIMO”, which uses a 2D antenna array to add elevation beamforming and so exploit full 3D beamforming. Later down the road, in 3GPP Releases 14 and beyond, we can expect support for what the company calls higher-order “Massive MIMO”. This will consist of more than 16 antennas in an array and should enable devices to connect to even higher spectrum bands.

The sixth bullet point deals with increasing efficiency for Internet of Things applications, also known as “LTE IoT.” One element of this strategy includes enhanced power-save modes (extended DRX sleep cycles) for small devices. More importantly, this also means more than 10 years of battery life for certain use cases. The company wants to use more narrowband operating modes (see: 1.4MHz and 180kHz) in order to reduce device costs, and wants to deploy “deeper cellular coverage” with up to 20dB of additional signal attenuation. Previously, regular LTE and LTE-Advanced would top out around ~50dB for most carriers. Going up 20dB will certainly make a noticeable difference for many indoor, multi-floor users in corporate environments and for those around heavy foliage, mountain ranges and hillsides in both urban and suburban environments.

The seventh bullet point deals with integrating LTE into the connected cars of the future. Qualcomm calls this “Vehicle-to-Everything” Communications, or V2X for short. The goal is to connect cars at higher-than-gigabit LTE Advanced Pro speeds to one another, and to also connect them with nearby pedestrians and IoT-connected objects in the world around them. Privacy and political issues aside, this will supposedly make our collective driving experiences “safer and more autonomous.” Specifics include “always-on sensing,” vehicle machine learning, vehicle computer vision, “bring your own driver” and vehicle-to-infrastructure communication, all from within the car. The company calls the result of V2X automotive integration “on-device intelligence.”

To further things along with ubiquitous gigabit LTE, Qualcomm also eventually wants you to completely ditch your cable / satellite / fiber optic (FTTN and FTTP) television subscriptions and leverage the speeds of its LTE Advanced Pro technology for a “converged digital TV network.” This means television broadcasts over LTE to multiple devices, simultaneously – basically, an always-on LTE Advanced Pro TV broadcast stream to 4K home televisions, tablets and smartphones for the whole family, all at once and at any time of day.

In the ninth bullet point, Qualcomm is boasting LTE-Advanced Pro’s capability for proximity sensing – without the use of GPS – autonomously. This includes using upgraded cell towers for knowing when friends are nearby and for discovering retail services and events, all without triggering WiFi or GPS modules on your device.

The tenth bullet point is an extension of the last one and uses LTE technologies at large for advanced public safety services (including 9-1-1 emergencies) – all without triggering WiFi or GPS modules for proximity data. This new “LTE Emergency Safety System” deployment will deliver both terrestrial emergency information as well as automotive road hazard information. Qualcomm expects this to emulate current Professional / Land Mobile Radio (PMR / LMR) push-to-talk systems on walkie-talkies.

For now, LTE Category 12 600Mbps (upgrade to current 3GPP Release 12) comes in 2016

While the gigabit-and-higher speeds of 3GPP Release 13 and beyond are still a couple years off, Qualcomm wants to kick things off with an update to 3GPP Release 12 (launched Q2 2014) with 600Mbps downlinks and 150Mbps uplinks achieved through the carrier aggregation technique.

During CES 2016, Qualcomm showed off its new “X12 LTE” modem add-on for the Samsung-made Snapdragon 820 Automotive SoC family, or “Snapdragon 820A” Series. The unit features LTE-Advanced (LTE-A) carrier aggregation (3x in the downlink and 2x in the uplink), comes with a new dual LTE FDD/TDD “Global Mode” capability, and supports dual SIM cards.

The X12 LTE modem features UE Category 12 on the downlink with speeds up to 600Mbps (75MBps), achieved through a transition from 64QAM (Quadrature Amplitude Modulation) in the older UE Category 9 specification (see: Snapdragon 810 modem) to a much higher-density 256QAM. It is also possible to enable up to 4 x 4 MIMO on the downlink carrier, which results in better bandwidth and improved coverage. The new modem uses UE Category 13 on the uplink side for speeds up to 150Mbps (18.75MBps) with 64QAM. The unit also has LTE-U support (LTE in unlicensed spectrum), allowing it to operate on 2.4GHz and 5GHz unlicensed channels for additional spectrum. Additionally, it can bond LTE and WiFi links together to boost download speeds with LTE + WiFi Link Aggregation (LWA).
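The headline figures can be reconstructed with back-of-the-envelope LTE arithmetic; the per-carrier baselines in the sketch below are generic LTE values (150Mbit/s per 20MHz downlink carrier with 2x2 MIMO and 64QAM, 75Mbit/s per uplink carrier with 64QAM), not numbers Qualcomm has published for this modem:

```python
# Rough reconstruction of the X12 LTE peak rates from standard LTE building blocks.
per_dl_carrier_64qam = 150          # Mbit/s: 20 MHz, 2x2 MIMO, 64QAM (6 bits/symbol)
per_dl_carrier_256qam = per_dl_carrier_64qam * 8 / 6   # 256QAM carries 8 bits/symbol

downlink = 3 * per_dl_carrier_256qam   # 3x downlink carrier aggregation
uplink = 2 * 75                        # 2x uplink carrier aggregation, 64QAM

print(f"Peak downlink ~{downlink:.0f} Mbit/s ({downlink / 8:.0f} MB/s)")
print(f"Peak uplink   ~{uplink:.0f} Mbit/s ({uplink / 8:.2f} MB/s)")
# Comes out at ~600 Mbit/s (75 MB/s) down and 150 Mbit/s (18.75 MB/s) up,
# matching the Category 12 / Category 13 figures above.
```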

(Source: Wikipedia.org – History of 3GPP LTE User Equipment Category releases)

Qualcomm has recorded a webinar with FierceWireless all about its roadmap from 4.5G LTE Advanced Pro technology (2016 – 2020) to next-next generation LTE 5G technology (2020 and beyond), which can be found here.

The company will also be present at Mobile World Congress 2016 between February 22nd and 25th in Barcelona, Spain to demonstrate new features of LTE Advanced Pro – including “Enhanced Licensed Assisted Access” (eLAA) and MuLTEfire (“multi-fire”) – at its exhibition booth. Our staff is sure to be present at the event and we look forward to sharing more hands-on demos very soon.

Courtesy-Fud

 

 

Samsung And TSMC Battle It Out

January 25, 2016 by Michael  
Filed under Computing

Samsung and TSMC are starting to slug it out with the introduction of third-generation 14nm and 16nm FinFET system semiconductor processes, but the cost could mean that smartphone makers shy away from the technology in the short term.

It is starting to look as if the sales teams of the two are each trying to show that their technology can cut power consumption and production costs the furthest.

In its annual results for 2015, TSMC announced that it plans to begin mass production of chips on the 16nm FinFET Compact (FFC) process sometime during the first quarter of this year, having finished developing the process at the end of last year. During the announcement TSMC talked up the fact that the 16nm FFC process focuses on reducing production costs further while lowering power consumption.

TSMC is apparently ready for mass production on the 16nm FFC process sometime during the first half of this year and has secured Huawei’s affiliate HiSilicon as its first customer.

HiSilicon’s Kirin 950, used in Huawei’s premium Mate 8 smartphone, is produced on TSMC’s 16nm FF process, while Apple’s A9 chip, used in the iPhone 6S series, is mass-produced on the 16nm FinFET Plus (FF+) process announced in early 2015. By adding the FFC process, TSMC now has three 16nm processes in action.

Samsung is not far behind; it has mass-produced second-generation 14nm FinFET chips using a process called LPP (Low Power Plus). This has 15 per cent lower power consumption than the first-generation 14nm process, LPE (Low Power Early).

Samsung Electronics’ 14nm LPP process is used for the Exynos 8 Octa series found in the Galaxy S7 and for Qualcomm’s Snapdragon 820. But Samsung is also preparing a third-generation 14nm FinFET process.

Vice-president Bae Young-chang of the strategy marketing team in Samsung’s LSI business department said it will use a process similar to the second-generation 14nm process.

Both Samsung and TSMC might have a few problems. It is not clear what the yields of these processes are and this might increase the production costs.

Even if Samsung and TSMC finish developing their 10nm processes at the end of this year and enter mass production next year, they will still have to upgrade their current 14nm and 16nm processes to make them more economical.

Even once the 10nm process is commercialised, many fabless businesses will still use 14nm and 16nm processes because they are cheaper. While we might see a few flagship phones using the higher-priced chips, we may not see 10nm in the majority of phones for years.

 

Courtesy-Fud

 

Intel Going 10nm Next Year

January 25, 2016 by Michael  
Filed under Computing

Intel is reportedly going to release its first 10nm processor family in 2017, expected to be the first of three generations of processors that will be fabbed on the 10nm process.

Guru 3D found a slide which suggests that Chipzilla will not be sticking to its traditional “tick-tock” model. To be fair, Intel has been using the 14nm node for two generations so far – Broadwell and Skylake – and the Kaby Lake processor architecture due later this year will also use 14nm.

The slide tells us pretty much what we expected. The first processor family to be manufactured on a 10nm node will be Cannonlake, expected to launch in 2017. The following year, Intel will reportedly launch Icelake processors, again using the same 10nm node. Icelake will be succeeded by Tigerlake in 2019, the third generation of Intel processors using a 10nm silicon fab process. The codename for Tigerlake’s successor is unknown, but when it comes out in 2020 it will reportedly use 5nm.

 

Architecture         | CPU series      | Tick or Tock | Fab node | Year released
Presler/Cedar Mill   | Pentium 4 / D   | Tick         | 65 nm    | 2006
Conroe/Merom         | Core 2 Duo/Quad | Tock         | 65 nm    | 2006
Penryn               | Core 2 Duo/Quad | Tick         | 45 nm    | 2007
Nehalem              | Core i          | Tock         | 45 nm    | 2008
Westmere             | Core i          | Tick         | 32 nm    | 2010
Sandy Bridge         | Core i 2xxx     | Tock         | 32 nm    | 2011
Ivy Bridge           | Core i 3xxx     | Tick         | 22 nm    | 2012
Haswell              | Core i 4xxx     | Tock         | 22 nm    | 2013
Broadwell            | Core i 5xxx     | Tick         | 14 nm    | 2014 (2015 for desktops)
Skylake              | Core i 6xxx     | Tock         | 14 nm    | 2015
Kaby Lake            | Core i 7xxx     | Tock         | 14 nm    | 2016
Cannonlake           | Core i 8xxx?    | Tick         | 10 nm    | 2017
Ice Lake             | Core i 8xxx?    | Tock         | 10 nm    | 2018
Tigerlake            | Core i 9xxx?    | Tock         | 10 nm    | 2019
N/A                  | N/A             | Tick         | 5 nm     | 2020

Courtesy-Fud

Samsung Starts Producing 4GB HBM For The Masses

January 22, 2016 by Michael  
Filed under Computing

Samsung has begun mass producing what it calls the industry’s first 4GB DRAM package based on the second-generation High Bandwidth Memory (HBM) 2 interface.

Samsung’s new HBM solution will be used in high-performance computing (HPC), advanced graphics, network systems and enterprise servers, and is said to offer DRAM performance that is “seven times faster than the current DRAM performance limit”.

This will apparently allow faster responsiveness for high-end computing tasks including parallel computing, graphics rendering and machine learning.

“By mass producing next-generation HBM2 DRAM, we can contribute much more to the rapid adoption of next-generation HPC systems by global IT companies,” said Samsung Electronics’ SVP of memory marketing, Sewon Chun.

“Also, in using our 3D memory technology here, we can more proactively cope with the multifaceted needs of global IT, while at the same time strengthening the foundation for future growth of the DRAM market.”

The 4GB HBM2 DRAM, which uses Samsung’s 20nm process technology and advanced HBM chip design, is specifically aimed at next-generation HPC systems and graphics cards.

“The 4GB HBM2 package is created by stacking a buffer die at the bottom and four 8Gb core dies on top. These are then vertically interconnected by TSV holes and microbumps,” explained Samsung.

“A single 8Gb HBM2 die contains over 5,000 TSV holes, which is more than 36 times that of an 8Gb TSV DDR4 die, offering a dramatic improvement in data transmission performance compared to typical wire-bonding based packages.”

Samsung’s new DRAM package features 256GBps of bandwidth, which is double that of an HBM1 DRAM package. This is equivalent to a more than seven-fold increase over the 36GBps bandwidth of a 4Gb GDDR5 DRAM chip, which has the fastest data speed per pin (9Gbps) among currently manufactured DRAM chips.
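Those figures line up with the interfaces involved; the sketch below uses the standard 1024-bit HBM2 stack interface and the 32-bit interface of a single GDDR5 chip, which are generic specification values rather than anything Samsung disclosed here:

```python
# Peak bandwidth = interface width (pins) x per-pin data rate, converted to bytes.
def bandwidth_gb_s(pins: int, gbit_per_pin: float) -> float:
    return pins * gbit_per_pin / 8

hbm2_stack = bandwidth_gb_s(1024, 2.0)   # 2 Gbit/s per pin -> 256 GB/s per stack
gddr5_chip = bandwidth_gb_s(32, 9.0)     # fastest current GDDR5 chip -> 36 GB/s

print(f"HBM2 stack : {hbm2_stack:.0f} GB/s")
print(f"GDDR5 chip : {gddr5_chip:.0f} GB/s")
print(f"Ratio      : {hbm2_stack / gddr5_chip:.1f}x")   # ~7x, matching the claim
```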

The firm’s 4GB HBM2 also enables enhanced power efficiency by doubling the bandwidth per watt over a 4Gb GDDR5-based solution, and embeds error-correcting code functionality to offer high reliability.

Samsung plans to produce an 8GB HBM2 DRAM package this year, and by integrating this into graphics cards the firm believes designers will be able to save more than 95 percent of space compared with using GDDR5 DRAM. This, Samsung said, will “offer more optimal solutions for compact devices that require high-level graphics computing capabilities”.

Samsung will increase production volume of its HBM2 DRAM over the course of the year to meet anticipated growth in market demand for network systems and servers. The firm will also expand its line-up of HBM2 DRAM solutions in a bid to “stay ahead in the high-performance computing market”.

Courtesy-TheInq

Intel’s New Core Processor Enhances Security

January 22, 2016 by Michael  
Filed under Computing

Intel has unveiled a new version of its 6th-gen Core family of chips aimed at enterprises.

Launched today, 6th-gen Core for business comprises a choice of the same Skylake chips revealed last year, as well as Intel vPro chips, but packaged up and targeted for business users with new features and a full business device refresh that will see new form factors.

The 6th-gen Core chips remain largely the same, particularly in terms of specs, but one of the biggest new features is the integration of Intel Authenticate, a solution that has been designed to make business systems more secure.

Authenticate is built onto the 6th-gen Core platform and is designed to “dramatically improve identity security” via true multifactor authentication technology. This sees user information, IT policy and credential decisions stored in computer hardware, making it harder for hackers to penetrate systems in the cloud.

The Intel Authenticate firmware can properly identify those trying to access systems and protect authentication factors, such as PIN entry, Bluetooth proximity and biometrics, from being accessed by the wrong people.

Another feature new to the 6th-gen Core is Intel Unite, which looks to expand workplace transformation solutions by doubling the offering of wireless and wired docking designs.

Intel said that Unite will “transform existing conference rooms” by bringing together the firm’s Core vPro processor functions and wireless capabilities so that workers can interact with meeting content in real time from any location. It is said to make life much easier for the average employee to work seamlessly between the home and the office.

The Core chip family was revealed at IFA in Berlin last year and is made up of Core i3, Core i5 and Core i7 chips aimed at all types of desktop devices across the market, including gaming towers, traditional PC towers, all-in-ones, mini PCs, portable all-in-ones and the Intel Compute stick.

These processors promise up to 60 percent better performance over the previous 5th-gen Core chips, with six times faster 4K video transcoding, 11 times better HD graphics performance, and the ability to be overclocked via full range base clock tuning.

What’s special about Skylake is that it is the first mainstream Intel desktop platform to support DDR4 memory, and is claimed to deliver 30 percent better performance than a three-year-old PC based on Ivy Bridge architecture, 20 percent better performance than a two-year-old PC (Haswell), and 10 percent better performance than a one-year-old PC (Broadwell).

Skylake is the successor to the chipmaker’s Broadwell architecture, and was unveiled at Intel’s Developer Forum last year. It is touted to deliver significant increases in performance, battery life and power efficiency.

Processors based on the Skylake architecture have a new chip design, despite being fabbed on the same 14nm process as Broadwell, making Skylake a ‘tock’ iteration in Intel’s ‘tick-tock’ chip architecture cadence.

Courtesy-TheInq

Is TSMC Going 5nm?

January 20, 2016 by Michael  
Filed under Computing

Foundry TSMC claims it will be ready to roll out its 5nm process technology two years after the launch of its 7nm node.

In a statement, company co-CEO Mark Liu said he expects to start production of 7nm chips in the first half of 2018. He did not say whether the node would be ready for volume production by that date or just test production. Either would be fairly ambitious.

TSMC’s R&D timeline for the 5nm process technology suggests it will be ready for launch in the first half of 2020, and the company looks likely to use extreme ultraviolet (EUV) lithography to make the chips.

Liu claimed that TSMC had made significant progress with EUV to prepare for its likely insertion at the 5nm node.

TSMC expects to qualify the 10 nm node in time for customer tape-outs in the first quarter of 2016, Liu said.

The outfit is predicting that its share of the 14/16nm foundry market segment will rise to more than 70 per cent in 2016 from about 40 per cent in 2015. TSMC’s 16nm FinFET processes, consisting of 16nm FinFET, 16nm FinFET Plus and 16nm FinFET Compact, will account for more than 20 per cent of the foundry’s total wafer revenue in 2016, the company said.

The new 16nm FFC node, a low-power and low-cost version of TSMC’s 16nm FinFET products, will be ready for volume production in the first quarter of 2016, co-CEO C.C. Wei added.

TSMC is on track to move its integrated fan-out (InFO) wafer-level packaging technology to volume production in the second quarter of 2016. “We do not expect adoption by a large number of customers. However, we do expect a few very large volume customers,” Wei said.

Apple is probably going to be among the first to adopt the InFO packaging technology, which many of its users will believe Steve Jobs invented.

Courtesy-TheInq

 

MediaTek Releases SoC For Ultra Blu-ray Players

January 19, 2016 by Michael  
Filed under Computing

MediaTek has shown off its MT8581, a highly integrated multimedia system-on-chip for Ultra High Definition (4K) Blu-ray players.

CES 2016 was buzzing over Ultra HD 4K content with High Dynamic Range (HDR) post-processing, and over Ultra HD discs and players. After all, UHD 4K discs will be available in seven weeks, and Time Warner announced a few titles just the other day.

MediaTek came up with its own solution, the MT8581, and we saw it in action. The quality looked great in the short demo we watched. MediaTek claims it is the only company with a dedicated chipset for Ultra HD Blu-ray players, so all the first players to launch later this year should have the MediaTek MT8581 inside.

Joe Chen, executive vice president and co-chief operating officer, MediaTek said:

“MediaTek’s expertise in multimedia technology means we understand the coming world of advanced screens. We design solutions so consumers get the most stunning, crystal-clear experience regardless of which format they choose for their viewing experience. UHD with HDR imaging has the potential to revolutionize home viewing as much as the leap from VHS to DVD did, hence our research and development investment.”

The new chip needs to process four times the pixels of Full HD while also handling picture processing and HDR. HDR creates more realistic lighting, something we first saw in computer games a few years ago. It does make the picture look better, but it is hard to judge unless you have two identical TV sets side by side playing the same content. HDR reproduces a greater dynamic range of light and dark visuals than is possible with standard digital imaging or photographic techniques, improving the appearance of on-screen contrast.

The MediaTek MT8581 supports Blu-ray, DVD and CD playback, including the latest BD-ROM formats such as BD-Live and BonusView. New players will be able to upscale non-native content, including DVDs and Blu-rays, to 4K. The MT8581 features HEVC, H.264 and VP9 decoding at 4K 60p for 4K (3840×2160) content, as well as MPEG-2, VP8 and VC-1 decoding at 2K 60p for legacy 2K (1920×1080) content.

For audio, it offers multi-format decoding of Advanced Audio Coding (AAC), Dolby Digital, Dolby Digital Plus, Dolby TrueHD, DTS and DTS-HD Master Audio, supporting high-quality audio streaming.

With three disc densities and capacities of up to 100GB, you can get much better quality than a Netflix HDR 4K stream. Netflix requires a 25Mbit/s connection, while the initial 4K Blu-ray specification allows for three densities: 50GB single-layer, 66GB dual-layer and 100GB triple-layer, with data read speeds of 82Mbit/s, 108Mbit/s and 128Mbit/s respectively.
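To put those bitrates in perspective, here is a quick sketch of how much video data each rate moves per hour:

```python
# Data moved per hour of video at the bitrates quoted above (decimal GB).
rates_mbit = {
    "Netflix 4K stream": 25,
    "UHD Blu-ray 50 GB disc": 82,
    "UHD Blu-ray 66 GB disc": 108,
    "UHD Blu-ray 100 GB disc": 128,
}
for name, mbit in rates_mbit.items():
    gb_per_hour = mbit * 3600 / 8 / 1000
    print(f"{name}: {mbit} Mbit/s ≈ {gb_per_hour:.1f} GB per hour")
# Even the slowest disc tier carries more than three times the data of the
# stream, which is where the extra picture-quality headroom comes from.
```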


MediaTek plans to mass-produce the MT8581 for Ultra HD 4K Blu-ray players in the second half of 2016. The company claims big successes with two key customers in 2015: last year MediaTek won contracts from Sony for 4K Android TV powered devices, as well as for many high-end sound bars and connected speakers.

MediaTek’s multimedia division also won an exclusive deal for the Android-based 4K Fire TV box, which became available in the latter part of 2015 and sold quite well.

Courtesy-Fud

 

Do PCs Need A Dedicated GPU?

January 14, 2016 by Michael  
Filed under Computing

Intel claims that casual or mainstream gamers no longer need a discrete graphics card, because integrated graphics have nearly caught up.

Speaking at a J.P. Morgan forum, Gregory Bryant, vice president and general manager of Intel’s desktop clients platform, said that integrated graphics were getting more powerful by the day.

He said that the top-level graphics processors integrated in Intel’s chips, called Iris and Iris Pro, can outperform 80 percent of discrete graphics chips.

Bryant said that Intel’s graphics were 30 times better than what they were five years ago.

The integrated graphics inside Intel’s latest Core processors, code-named Skylake, can handle three 4K monitors simultaneously, he said.

However he thinks that Intel has done a poor job of communicating the benefits of integrated graphics.

While all that might be true, you will still need a discrete graphics card from AMD or Nvidia for games like Crysis or The Witcher, and anyone who wants one of the new virtual reality headsets will need a pretty good GPU.

Bryant said Intel was targeting the gaming market with its top-line Core chips that can be overclocked in desktops with discrete GPUs.

Courtesy-Fud