Intel’s Mobile Kaby Lake Arrives In Q2 2017 Too

August 26, 2016 by  
Filed under Computing

A report from the folks at NotebookCheck.net shows an Intel slide detailing updates to the company’s mobile ULV processor lineup based on Kaby Lake, Intel’s third-generation 14nm design.

As we mentioned in July, Intel is describing Kaby Lake mostly as a “2017 platform”. It plans to launch some desktop processors in the fourth quarter of this year, but larger volumes are planned for Q1 2017 and will probably be announced formally at next year’s CES.

Quad-core ULV chips arriving for the first time

With Skylake, Intel currently separates its mobile processors into three series – the “Y” series (Core M) for 2-in-1 notebooks, the “U” series for thin and light notebooks, and the “H” series for gaming notebooks (with unlocked “HK” and quad-core “HQ” variants). The lineup includes some chips with Intel HD Graphics (listed as “+ 2” for “Tier 2”), while others feature upgraded Intel Iris Graphics (listed as “+ 3e” for “Tier 3”).

 “U” series gets a quad-core 15W design

There will be no new chip configurations for Core M moving from Skylake to Kaby Lake, as the new generation will again feature dual-core CPUs with Intel HD Graphics and a 6W TDP. According to the source, however, the Kaby Lake “U” series will gain a new quad-core variant with Intel HD Graphics inside a 15W TDP. This will sit alongside the two existing dual-core CPUs with Intel Iris Graphics (3e) in 15W and 28W designs.

“H” gaming series gets a quad-core 18W design

The Kaby Lake “H” gaming series will also be receiving a quad-core design with Intel HD Graphics inside a remarkable 18W TDP.

Not much else has been reported about the Kaby Lake notebook processor lineup yet, other than that the integrated GPUs will be capable of supporting High Dynamic Range (HDR) content, Wide Color Gamut (Rec.2020) and HDCP 2.2 playback. That is good news for consumers who want thin and light ultrabooks that don’t necessarily have physical room for a dedicated GPU, but who still want to enjoy 4K Ultra HD and similar resolutions with the benefits of a wider color gamut.

Courtesy-Fud

AMD’s 32-Core Zen Coming In Q2 2017

August 24, 2016 by  
Filed under Computing

AMD has revealed a heap of details about its 32-core Zen based product – codenamed Naples – and we have a few things to add. 

According to our well-informed sources, engineering samples were expected in Q4 2016, which starts in October. Remember, we were the first to mention Naples in detail in June 2016. AMD sometimes calls these products Alpha versions, but it looks like AMD was able to demonstrate the CPU a bit earlier, as it gave a public demonstration at the event in San Francisco last week. This could have been a pre-Alpha version that was stable enough to run.

The beta version will follow in Q1 2017, and this CPU should be the pre-final version before the company goes to initial production. There is another step in between, called the final/general sample, which is expected in Q2 2017 and is followed by initial production.

When a tech company says a product will launch in the second quarter, expect it to happen towards the end of the quarter. Our best guess is a launch around Computex 2017, which takes place in the last days of May or the first days of June 2017.

The fact that AMD now supports DDR4 memory, USB 3.1 Gen 2 (10Gbps) and NVMe makes its server portfolio a bit more competitive with Intel’s offering.

AMD’s Michael Clark is expected to give the audience at the Hot Chips conference a few more details about “A New, High Performance x86 Core Design from AMD”, but we doubt that he will discuss the possible launch date in as much detail as we have.

http://www.fudzilla.com/news/processors/41376-amd-s-ceo-showcases-8-and-32-core-zen

Courtesy-Fud

Intel’s Kaby Lake Line-Up Revealed

August 23, 2016 by  
Filed under Computing

Chinese tech website Coolaler posted an extensive list of Intel’s upcoming Kaby Lake desktop processors based on Socket LGA 1151 yesterday.

There are 10 processors in the list, all quad-core parts with TDPs ranging from 35W up to 95W – and only two unlocked models. The lineup is broken up into three segments – “K” series for unlocked parts, “S” series which means “standard” parts without suffixes, and “T” series which means low-power variants.

Core i7 7700K, Core i7 7700 and Core i7 7700T

At the top of the list is the first unlocked model – Core i7 7700K with a 4.2GHz core clock (4.5GHz Boost), four cores, eight threads, an 8MB cache and 95W TDP. This is followed by two variants, the Core i7 7700 3.6GHz and Core i7 7700T 2.9GHz.

Core i5 7600K, Core i5 7600 and Core i5 7600T

The next unlocked model is the Core i5 7600K with a 3.80GHz core clock (4GHz Boost), four cores, four threads, a 6MB cache and 95W TDP. This is followed by two variants, the Core i5 7600 3.5GHz and the Core i5 7600T 2.8GHz.

Core i5 7500, Core i5 7500T, Core i5 7400 and Core i5 7400T

At the bottom of the list are four more models – the Core i5 7500 with a 3.4GHz core clock, the Core i5 7500T with a 2.7GHz core clock, the Core i5 7400 with a 3GHz core clock and the Core i5 7400T with a 2.4GHz core clock.

On the surface, the main difference between Kaby Lake and Skylake desktop parts is that clock speeds have increased. Architecturally, however, the new design should give at least a 5 to 10 percent overall performance improvement, based on benchmarks released back in May. The chips will also add native USB 3.1 support, native Thunderbolt 3 support, native HDCP 2.2 support, and full fixed-function HEVC Main10 and VP9 10-bit hardware decoding. As for a release date, the source mentions that Kaby Lake mainstream desktop parts have slipped slightly to early Q1 2017.

As announced earlier this week at the Intel Developer Forum, the company’s current focus is to bring the new architecture to mobile form factors (4W to 15W TDP) this fall for the various shopping seasons, beginning with the so-called “back to school” period, before continuing with desktop products in the first quarter of next year.

Courtesy-Fud

Samsung Beats TSMC For FinFET Business

August 16, 2016 by  
Filed under Computing

Tech giant Samsung Electronics has won a contract to make Nvidia GPUs, according to South Korea’s Chosun Biz newspaper.

The paper said Samsung would start making the next-generation Pascal GPUs using its 14-nanometre production technology before year-end. It did not specify the value of the order or say how many chips will be made. Samsung and Nvidia are not saying anything.

According to the newspaper, Samsung Electronics is currently testing Nvidia’s Pascal architecture on new production lines at its S1 campus in Giheung, Gyeonggi Province. The first Samsung-made Nvidia GPUs are expected to be supplied later this year.

Nvidia normally ships this sort of thing through Taiwan’s TSMC but has changed its mind because of recent unstable supply issues and the fact it wants to diversify its production line, the paper claimed.

Courtesy-Fud

 

nVidia Makes Financial Gains

August 16, 2016 by  
Filed under Computing

Nvidia released its financial results yesterday for the second quarter of its fiscal year 2017, which ended July 31, 2016.

The company’s numbers came in slightly better than expected during a quarter fueled by consumer interest in the 4K-capable 16nm Pascal GPU product family, along with increasing enterprise investments in Nvidia’s GPU-accelerated deep learning, computer vision and AI platforms and products.

The company turned in revenue of $1.43 billion with a 58.1 percent adjusted gross margin. This is up nine percent from $1.30 billion in Q1 (February 1 to May 1, 2016) and up 24 percent from $1.153 billion a year earlier in Q2 FY2016 (April 27 to July 26, 2015).

Looking ahead to Q3 FY2017 – ending in late October – the company is issuing revenue guidance of between $1.65 and $1.71 billion, with an adjusted gross margin between 57.5 and 58.5 percent.
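For anyone who wants to sanity-check those percentages, here is a minimal Python sketch using the rounded revenue figures quoted above (the small gap between our quarter-on-quarter result and the quoted nine percent is purely down to rounding of the inputs):

```python
# Sanity check of Nvidia's growth figures, using the rounded revenue
# numbers quoted above (billions of USD).
q2_fy2017 = 1.43    # quarter ended July 31, 2016
q1_fy2017 = 1.30    # prior quarter (February 1 to May 1, 2016)
q2_fy2016 = 1.153   # same quarter a year earlier

qoq_growth = (q2_fy2017 / q1_fy2017 - 1) * 100
yoy_growth = (q2_fy2017 / q2_fy2016 - 1) * 100
print(f"Quarter-on-quarter growth: {qoq_growth:.0f}%")   # ~10% on rounded inputs
print(f"Year-on-year growth:       {yoy_growth:.0f}%")   # ~24%

# Midpoint of the Q3 FY2017 revenue guidance range
low, high = 1.65, 1.71
print(f"Guidance midpoint: ${(low + high) / 2:.2f} billion")
```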

During Q2 FY2017, which ended July 31, 2016, the company unveiled its flagship Geforce GTX 1080 graphics card along with the Geforce GTX 1070 and Geforce GTX 1060. Just two days after the quarter ended, it released an even higher-performing Pascal enthusiast model – the new Titan X – although revenue from that card will show up in the Q3 FY2017 results a few months from now.

“Strong demand for our new Pascal-generation GPUs and surging interest in deep learning drove record results,” said Jen-Hsun Huang, co-founder and chief executive officer, NVIDIA. “Our strategy to focus on creating the future where graphics, computer vision and artificial intelligence converge is fueling growth across our specialized platforms — Gaming, Pro Visualization, Datacenter and Automotive.”

As we mentioned a few days ago, the driverless automotive market is expected to grow to $42 billion in nine years and so far the few companies that have signed on with Nvidia’s Drive PX hardware include BMW, Ford, Daimler and Audi. Nvidia is currently working closely with Audi as its primary brand but will soon move to Volkswagen, Seat, Skoda, Lamborghini and Bentley.

“We are more excited than ever about the impact of deep learning and AI, which will touch every industry and market. We have made significant investments over the past five years to evolve our entire GPU computing stack for deep learning. Now, we are well positioned to partner with researchers and developers all over the world to democratize this powerful technology and invent its future,” Jen-Hsun said.

During the quarter ending July 31st, Nvidia also launched the Quadro P6000 workstation GPU with 12 teraflops of compute power, introduced the Tesla P100 accelerator for PCI-E based servers, released its first self-created game called VR Funhouse, and introduced an ultra-high resolution screenshot capture utility called Ansel, although it is limited to a few select games for now.

Courtesy-Fud

Will MediaTek’s Helio X30 SoC Debut In Early 2017?

August 12, 2016 by  
Filed under Computing

TSMC is gearing up to build MediaTek’s new Helio X30 SoC using the 10nm process and it looks like everything will be set for volume production in the first quarter of 2017.

It looks like the chip will be out before TSMC uses the same process to make Apple’s new chips later in 2017. Of course, when Apple releases its chip it will try to convince the world that it was first and that it invented the whole process.

TSMC will also offer its backend integrated fan-out (InFO) wafer-level packaging (WLP) technology for Apple’s 10nm A11 chips. However, this timetable means that the X30 will really be the groundbreaking product that tests TSMC’s 10nm process, and MediaTek is taking the biggest risk.

Digitimes said that Qualcomm is working with Samsung Electronics to produce its next-generation Snapdragon 830 chips using Samsung’s 10nm technology, and that TSMC had already lost the orders for Qualcomm’s Snapdragon 820 series to Samsung.

TSMC told its July investors meeting that its 10nm process will start generating revenues in the first quarter of 2017. The node has received product tape-outs from three clients, and more tape-outs are expected to come later in 2016, the foundry said.

Courtesy-Fud

 

Will Others Hop On The Pokemon Go Bandwagon?

August 11, 2016 by  
Filed under Gaming

Pokemon Go is the only thing anyone wants to talk about. Even people who don’t want to talk about Pokemon Go end up talking about it all the time, if only to tell everyone how sick they are of people talking about Pokemon Go. Social networks are full of Pokemon Go, going out for a drink is now impossible without occasional interruptions as a buzzing phone signals the possible arrival of a rare beast, and comparisons of recent prized acquisitions have replaced complaints about the weather as smalltalk.

It’s not just your social group that’s talking about Pokemon Go, though. Damned near every conversation I’ve had within the industry in recent days has turned to Pokemon Go at some point. The games industry has produced some remarkable social phenomena in recent decades – Grand Theft Auto 3, Halo and Angry Birds all spring to mind as games that leapt across the boundaries to ignite the mainstream imagination, at least for a time – but none has been as fast, as widespread or as visible as Pokemon Go. It’s inevitable, then, that business people across the industry find themselves wondering how to help themselves to a slice of this pie.

Behind the headlines about the game itself, there’s another story building steam. Some investors and venture capitalists are hunting for the “next Pokemon Go”, or a “Pokemon Go killer”; developers are frantically preparing pitches and demos to that effect; IP holders are looking at their own franchises and trying to figure out which ones they could “do a Pokemon Go” with. I know of several investor meetings in the past week alone in which developers of quite different games were needled to push their titles towards mobile AR in an effort to replicate the success of Pokemon Go.

This is an ill-advised direction, to say the very least. From a creative standpoint, it’s hard not to roll one’s eyes, of course; this bandwagon-hopping occurs after every major hit game earns its success. For a couple of years after any truly huge game captures the industry’s imagination, it seems that the only words investors want to hear are “it’s like that hit game you think you understand, but with something extra”. Sometimes that’s not a bad thing; “it’s like Grand Theft Auto but with superpowers” was probably the pitch line for the excellent Crackdown, while “it’s like Grand Theft Auto but we drink more heavily in our design meetings” was probably not the pitch line for Saints Row, but should have been. This approach does also yield more than its fair share of anaemic clones of great games, but it has its merits, not least in being a clear way of communicating an idea to people who may not be experts in game design.

In the instance of Pokemon Go, however, there’s a really fundamental problem with the bandwagon jumping. Even as third parties fall over themselves to figure out how to hop aboard the Pokemon Go bandwagon, the fact is that we don’t even know if this bandwagon is rolling yet. Pokemon Go is a free-to-play mobile game, which means that its phenomenal launch is only the first step. In F2P, a great launch is not a sign of success, it’s a sign of potential; the hard work, and the real measure of a game’s success, is what comes next.

To put this in blunt terms, Pokemon Go has just managed to attract the largest audience of any mobile game within weeks of its launch – and it could just as readily find itself losing that audience almost in its entirety within a few weeks. If that happens, those enormous download numbers and the social phenomenon that has built up around the game will be almost meaningless. Mobile games make their money over long periods of time and rely upon engaging players for months; a mobile game that’s downloaded by millions, but is only being played by thousands within a few weeks, is not a success, it’s a catastrophic case study in squandered potential.

I’m not necessarily saying that this will happen to Pokemon Go – though there are warning signs there already, which I’ll get to in a moment – I’m saying, rather, that it could happen to Pokemon Go, and that it’s therefore vastly premature for anyone to be labelling this as a model for success or chasing after it with their own mobile AR titles. There are shades of what happened with VR, where Facebook’s acquisition of Oculus drove ludicrous amounts of capital into some very questionable VR startups and projects, inflating a valuation bubble which many investors are now feeling deeply uncomfortable about. Here, the initial buzz for Pokemon Go has sent capital seeking out similar projects long before we actually get any proper feedback on whether the model is sustainable or worthwhile.

There’s actually only one way in which Pokemon Go has been an unqualified success thus far, and that’s in its incredibly powerful validation of the Pokemon brand. Nintendo walks away from this whole affair a winner, no matter what; the extraordinary launch of the game is, as I’ve argued previously, a testament to the huge appeal of Pokemon, the golden age of nostalgia it’s going through, and the clever recognition of its perfect fit to the outdoor, AR-based gameplay of Niantic’s games. The thing is that thus far, we simply can’t tell to what extent Pokemon Go is riding the wave of that brand, and to what extent it’s actually bedding in as a sustainable game with a huge playing (and paying) audience.

I have my own suspicions that Pokemon Go is actually quite troubled on the latter count. Looked at from the standpoint of mobile and F2P game design, the game is severely lacking in the crucial area of player retention. At first, it does a great job; it trickle-feeds new Pokemon to you and filling out the first 100 or so entries in the Pokedex is a fun challenge that keeps players coming back each day. It’s then that things become more problematic. As players reach higher levels, the game applies significantly more friction (not necessarily in fun ways, with Niantic making some very dubious guesses as to the tolerance for frustration of their players) even as the actual reasons for playing start to fade away.

At high levels, finding or evolving new creatures is incredibly rare, and the only other thing for players to do is battling at Pokemon Gyms – which some players find entertaining, but which is a completely disconnected experience from the thing people have been enjoying up to that point, namely exploring and collecting new Pokemon. The idea that players who love exploring and collecting will be motivated by combat at Gyms seems naive, and misunderstands the different motivations different people have for playing games. My suspicion is that on the contrary, lots of players, perhaps a significant majority, will complete as much of their Pokedex as they reasonably can before churning out of the game – a high churn rate that will be exacerbated by the dying down of the “halo” of social media around the game, which inexplicably lacks any social features of its own.

I could be wrong – I’d be very happy to be wrong, in fact – but my sense of where Pokemon Go is headed is that, absent some dramatic updates and changes from Niantic in the coming weeks, the game is destined to be a fad. It will achieve its objective for Nintendo in some regards, establishing the value of the firm’s IP on mobile and probably igniting interest in this year’s upcoming 3DS Pokemon titles, but in the broad scheme of things it’s likely to end up being a fun summer fad that never converts into being a sustainable, long-term business.

In that case, those companies and investors chasing the Pokemon Go dollar with ideas for Pokemon Go killers or Pokemon Go-alikes are running down a blind alley. Crucially, they’re misunderstanding the game’s appeal and value; at the moment, Pokemon Go’s appeal is firmly rooted in its IP, and no other IP is ever going to replicate that in the same way. Digimon might have some appeal within a certain age group; Yokai Watch is largely unknown in the west and its players in Japan skew too young for an outdoor AR game to make much sense; I can think of no other franchise that would fit the “Pokemon Go model” well enough to make for an appealing game. If Pokemon Go turns out to be sustainable, then there’s potential for other companies to start thinking about what to do with this new audience of people who have fallen for mobile AR experiences; but until that happens, every VC dollar or man-hour of design time spent on a “Pokemon Go killer” is most likely being wasted entirely.

Courtesy-GI.biz

 

Samsung Debuts 15TB SSD

August 4, 2016 by  
Filed under Computing

Samsung is shipping its PM1633a SSD, which has 15.36TB of storage space, although you are not going to get much change out of $10,000.

Samsung now has the drive available at select retailers, but at around $10,000 it is one of the most expensive SSDs around. Pricing seems to vary too, with CDW asking $10,311.99 while SHI wants $9,690 on pre-order. There is a 7.68TB flavour, but that is still $5,700.

The SSDs are built from Samsung’s 256Gb TLC 3D V-NAND memory dies, with 16 dies stacked into a 512GB package. The biggest drive uses 32 of those packages to reach its 15.36TB capacity. There is a new controller designed specifically for this drive to increase the performance on offer. The 15.36TB SSD delivers sequential read performance of up to 1200 MB/s and sequential write performance of up to 900 MB/s over a 12Gbps SAS interface.

Random read performance is rated at 195,000 IOPS and random writes at 31,000 IOPS. Those wanting to spend less money and needing less storage can get 480GB, 960GB, 1.92TB, 3.84TB and 7.68TB models.
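As a back-of-the-envelope check on those capacity figures, here is a small Python sketch; treating the gap between raw and advertised capacity as overprovisioning is our own inference rather than anything Samsung has confirmed:

```python
# Rough capacity maths for the PM1633a, based on the figures quoted above.
die_gb = 256 / 8                  # one 256Gb (gigabit) TLC V-NAND die = 32 GB
package_gb = 16 * die_gb          # 16 dies stacked per package = 512 GB
raw_tb = 32 * package_gb / 1000   # 32 packages in the top model = 16.38 TB raw

advertised_tb = 15.36
reserved = 1 - advertised_tb / raw_tb

print(f"Raw NAND:   {raw_tb:.2f} TB")
print(f"Advertised: {advertised_tb:.2f} TB")
# Roughly 6% of the raw flash is held back, presumably as spare area
# and overprovisioning (our inference, not a Samsung-confirmed figure).
print(f"Reserved:   {reserved:.1%}")
```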

Although it looks pricey, it can actually work out cheaper for businesses running massive data centers. Power consumption is around 11W active and 4.5W idle.
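A quick cost-per-terabyte comparison, using the list prices quoted above, shows why the top model can still make sense for data centers despite the sticker shock:

```python
# Cost per terabyte at the prices quoted above (list/street pricing).
models = {
    "PM1633a 15.36TB": (15.36, 10_000),
    "PM1633a 7.68TB":  (7.68,   5_700),
}
for name, (capacity_tb, price_usd) in models.items():
    print(f"{name}: ${price_usd / capacity_tb:,.0f} per TB")
# The 15.36TB drive comes out at roughly $650 per TB versus about $740 per TB
# for the 7.68TB model, before counting the drive bays and power it saves.
```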

Courtesy-Fud

 

Will nVidia Debut Pascal For Laptops This Month?

August 4, 2016 by  
Filed under Computing

Nvidia will be showing off its Pascal-based discrete notebook GPUs at Gamescom in Europe, on August 17-21.

Digitimes claims that Asustek Computer, MSI, Gigabyte Technology and Clevo are expected to show off their latest Pascal-based offerings. What is interesting is that they see Europe as the major market for gaming PC products. The number of gamers in the region has been rising rapidly, and many gaming PC vendors have been expanding their reach into Europe’s retail channel and sponsoring e-sports teams there.

Apparently Nvidia is unifying its product names and will no longer use the letter M to differentiate its desktop and notebook products. At Gamescom, Nvidia will unveil its GeForce GTX 1080/1070/1060-series GPUs for notebooks.

This seems to mean that Nvidia’s desktop and notebook GPUs with the same name will offer equal performance, a move away from the past, when Nvidia’s notebook GPUs were weaker than its desktop parts. Meanwhile, gaming notebooks with the existing 980M/970M/960M GPUs are expected to see price cuts.

Courtesy-Fud

 

Is AMD Getting Extremely Close To Samsung?

August 2, 2016 by  
Filed under Computing

AMD’s relationship with GloFo has always been described as “complicated”, and it appears to be getting more open.

AMD recently mentioned that it has built hardware directly with Samsung and there is a further option to tap the company in the future for product ramps.

Analyst Patrick Moorhead of Moor Insights &amp; Strategy reported the news after investors’ questions about where AMD was building most of its hardware became a little more pointed.

AMD said that it bought $75 million worth of wafers from GlobalFoundries in Q2, a number that struck Moorhead and co as a bit on the small side.

Moorhead questioned AMD on the deal and was told:

“AMD has strong foundry partnerships and our primary manufacturing partners are GLOBALFOUNDRIES and TSMC. We have run some product at Samsung and we have the option of enabling production with Samsung if needed as part of the strategic collaboration agreement they have with GLOBALFOUNDRIES to deliver 14nm FinFET process technology capacity.”

If AMD has the option to build at Samsung, that could be a bad sign for GlobalFoundries. After all, AMD only spun off the outfit because it wanted a more agile manufacturing partner. GlobalFoundries struggled with its customer base, and AMD had to cancel its Krishna and Wichita parts and move to TSMC.

GloFo then canned its 20nm and 14nm XM nodes and licensed 14nm technology from Samsung, only to experience delays with that too.

Getting more out of Samsung might not result in significant volumes, but the option gives AMD a fallback if GloFo or TSMC run into ramping or yield problems. Since GloFo’s 14nm process is licensed from Samsung, Samsung could easily produce the same parts itself.

Courtesy-Fud

Will Moore’s Law Be Obsolete By 2021?

July 28, 2016 by  
Filed under Computing

Transistors will stop shrinking in just five years, according to the 2015 International Technology Roadmap for Semiconductors (ITRS).

After 2021, the report forecasts, it will no longer be economically desirable for companies to continue to shrink the dimensions of transistors in microprocessors. Instead, chip manufacturers will turn to other means of boosting density.

In fact, this is the last ITRS roadmap, marking the end of a more-than-20-year coordinated planning effort that began in the United States and was later expanded to include the rest of the world.

However, the Semiconductor Industry Association, which represents companies including IBM and Intel, said that interest has waned and that it will instead do its own work, in collaboration with another industry group, the Semiconductor Research Corporation, to identify research priorities for government. Other ITRS participants will continue with a new roadmapping effort under a new name, conducted as part of an IEEE initiative called Rebooting Computing.

Analysts say that the difficulty and expense of keeping pace with Moore’s Law have resulted in significant consolidation. In 2001 there were 19 companies developing and manufacturing logic chips with leading-edge transistors. Now there are just four: Intel, TSMC, Samsung and GlobalFoundries.

These firms can communicate directly with their equipment and materials suppliers and do not want to sit down and tell their rivals what they are up to.

Semiconductor companies that no longer make leading-edge chips in house rely on the foundries that make their chips to provide advanced technologies. What’s more, chip buyers and designers are increasingly dictating the requirements for future chip generations.

This final ITRS report is titled ITRS 2.0. The name reflects the idea that improvements in computing are no longer driven from the bottom up, by tinier switches and denser or faster memories. Instead, it takes a more top-down approach, focusing on the applications that now drive chip design, such as data centers, the Internet of Things, and mobile gadgets.

The new IEEE roadmap—the International Roadmap for Devices and Systems—will also take this approach, but it will add computer architecture to the mix, allowing for “a comprehensive, end-to-end view of the computing ecosystem, including devices, components, systems, architecture, and software,” according to a recent press release.

Transistor miniaturization was still part of the long-term forecast as recently as 2014, when the last ITRS report was released. That report predicted that the physical gate length of transistors (an indicator of how far current must travel in the device) and other key logic chip dimensions would continue to shrink until at least 2028. But 3D concepts have gained momentum since then. The memory industry has already turned to 3D architectures to ease miniaturisation pressure and boost the capacity of NAND flash, and the next step for logic is monolithic 3D integration, which would build layers of devices one on top of another and connect them with a dense forest of wires.

Moore’s Law only predicted how many transistors can fit in a given area of an IC. Companies could still make transistors smaller well into the 2020s, but it will be more economical to go 3-D.

Before 3-D integration is adopted, the ITRS predicts that leading-edge chip companies will move away from the FinFET transistor structure. According to the roadmap, chipmakers will leave that in favor of a lateral, gate-all-around device that has a horizontal channel like the FinFET but is surrounded by a gate that extends underneath. After that, transistors will become vertical, with their channels taking the form of pillars or nanowires. The traditional silicon channel will also be replaced by channels made with alternate materials, namely silicon germanium, germanium, and compounds drawn from columns III and V of the periodic table.

The doubling of transistor densities has not been linked to improvements in computing performance for ages anyway. In the good old days, shrinking transistors meant faster clock speeds, but by the 1990s the extra metal layers added to wire up the growing numbers of transistors were introducing significant delays, and performance gains came instead from redesigned chip microarchitectures. By the 2000s the main issue was heat: transistor densities were so high that heat limited clock speeds, so companies began packing multiple cores onto chips to keep things moving.

Courtesy-Fud

 

 

Will AMD Go CPU/GPU In Datacenters?

July 28, 2016 by  
Filed under Computing

AMD is drawing up a cunning plan to build a “super-chip” with a CPU and a GPU in a single box to put the fear of god into Nvidia and Intel in the data centre.

According to PC World the move will put AMD back into the server business, which is pretty much dead in the water at the moment.

Apparently, when Zen arrives, AMD wants to merge the CPU with a high-performance GPU to create a mega-chip for high-performance tasks.

AMD CEO Lisa Su said the tech will involve fusing Vega and Zen into one big chip for enterprise servers and supercomputing.

She said the move will come “in time”. “It’s an area where combining the two technologies makes a lot of sense.”

AMD has had a crack at this before. It has already combined full-featured CPUs and GPUs on made-to-order chips for the Xbox One and PlayStation 4. The 5-billion transistor Xbox One chip uses an eight-core AMD CPU code-named Jaguar and a Radeon graphics processor. But this is the first time that it has been talked about as a way of getting itself back into serverland.

Ironically, the move is possible thanks to the fact that GPUs are being used as co-processors in some of the world’s fastest computers. Google has slipped them into data centers for deep learning tasks. But this is a world where Nvidia rules.

The only way for AMD to beat Nvidia and Intel in that space is to fuse the GPU and CPU into a single speedy box. Chances are it would push into the market on price and efficiency based on the concept that companies would only have to buy one chip.

Courtesy-Fud

 

Intel’s Knights Landing Thwarting nVidia Plans

July 27, 2016 by  
Filed under Computing

Bad news for Nvidia: supercomputer maker Cray says that Intel’s Knights Landing is giving Nvidia a run for its money.

Cray’s boss Peter Ungaro, whose outfit makes supercomputers based around both Knights Landing and Nvidia gear, has hinted that the Intel parts are gaining traction.

The second-generation Xeon Phi product, codenamed Knights Landing, is available now as a stand-alone processor, with a co-processor version to be released later on. All this stands in the way of Nvidia’s cunning plans in the market.

According to Ungaro, the company has a “substantial amount of business” that relies on both Intel’s Knights Landing Xeon Phi part and Nvidia’s Tesla P100, with significant orders for both. But, he added, orders for systems based on Knights Landing actually exceed orders for systems that use the Tesla P100. In other words, Knights Landing is already cleaning Tesla’s clock.

Motley Fool thinks that, at the moment, the market is big enough for both of them; Nvidia has reported that its datacentre-related sales were up 63 per cent year-over-year. But we can expect Intel to start getting more Chipzillish as it starts bumping into Nvidia’s sales teams.

It might also start getting interesting when ARM chips start making an impact.

Courtesy-Fud

 

Are Movie Theaters Moving To Virtual Reality?

July 26, 2016 by  
Filed under Around The Net

Samsung’s Gear VR headset has been installed in what is believed to be the first virtual reality pop-up cinema.

The VIVID VR Cinema has been constructed in Toronto, Canada, where a total of three different films were being shown — The Visitor, where a young couple prepares for the woman’s greatest fear to arrive; Imago, a title about a former dancer in a coma who’s aware of her surroundings; and Sonar, a movie about a drone that discovers a signal on an asteroid.

The cinema is small – only 30 seats. Each has a pair of noise-cancelling headphones and a Gear VR with a Galaxy S7 clipped to the back. Tickets cost $20 for the 40-minute screening of the three films.

The movies have been carefully crafted to let viewers choose different narrative threads to focus on, so even the plot is interactive.

It is expected that more of this type of entertainment will arrive when more content is available. It might be a couple of decades before the first Hollywood blockbuster though.

Courtesy-Fud

 

Is nVidia’s Geforce GTX 1060 Living Up To The Hype?

July 25, 2016 by  
Filed under Computing

As announced earlier, Nvidia has officially lifted the NDA on its Geforce GTX 1060, allowing sites to publish reviews, which also means that retailers and e-tailers now have the green light to start selling the new graphics card.

Based on the 16nm GP106 GPU, the new Geforce GTX 1060 is the third Nvidia Geforce graphics card built on the new Pascal architecture. The GP106 packs 1280 CUDA cores, 80 TMUs and 48 ROPs, and the card comes with 6GB of GDDR5 memory on a 192-bit memory interface.

The new Nvidia Geforce GTX 1060 Founders Edition, which will apparently be sold only by Nvidia, runs at 1506MHz base and 1709MHz Boost GPU clocks, while the memory runs at a reference clock of 8000MHz (effective), which adds up to 192GB/s of memory bandwidth.
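The 192GB/s figure follows directly from the memory configuration; here is a quick sketch of the standard GDDR5 bandwidth arithmetic:

```python
# Memory bandwidth = effective data rate * bus width / 8 bits per byte
effective_rate_mts = 8000   # 8000MHz effective GDDR5 data rate quoted above
bus_width_bits = 192        # GTX 1060 memory interface width

bandwidth_mb_s = effective_rate_mts * bus_width_bits / 8
print(f"{bandwidth_mb_s / 1000:.0f} GB/s")   # prints 192 GB/s
```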

The reference Founders Edition comes with a standard blower-style cooler which is somewhat simplified, lacking both heatpipes and a vapor chamber, mostly because the GTX 1060 has only a 120W TDP. The card needs a single 6-pin PCIe power connector, which leaves plenty of headroom for further overclocking.

Performance-wise, the Geforce GTX 1060 is on par with the 4GB GTX 980, and since it comes with 2GB more VRAM, it is the better choice. More importantly, the Geforce GTX 1060 is in most cases faster than the Radeon RX 480, its direct competitor on the market.

Unfortunately, the GTX 1060 lacks SLI support, probably because it would kill the sales of the GTX 1070 and GTX 1080 graphics cards.

Priced at US $299 for the Founders Edition, with an MSRP of US $249 for partner cards, the Geforce GTX 1060 is quite impressive, offering more performance than the recently launched Radeon RX 480 and bringing that impressive Pascal power efficiency to the mainstream market.

Hopefully, this will mark the beginning of the price wars in the mainstream graphics card segment and will push the prices closer to the MSRP. Both the RX 480 and the GTX 1060 offer decent performance per buck so it will be a fight to the bitter end.

Courtesy-Fud

 
