IBM Funds Researchers Who Create KiloCore Processor

June 22, 2016 by Michael  
Filed under Computing

Researchers at the University of California, Davis, Department of Electrical and Computer Engineering have developed a 1,000-core processor that may eventually reach the commercial market.

The team developed the energy-efficient, 621-million-transistor “KiloCore” chip, which can manage 1.78 trillion instructions per second, and since the project has IBM’s backing it could end up in the shops soon.

Team leader Bevan Baas, professor of electrical and computer engineering, said that it could be the world’s first 1,000-processor chip and that it is the highest clock-rate processor ever designed in a university.

While other multiple-processor chips have been created, none exceed about 300 processors. Most of those were created for research purposes and few are sold commercially. IBM, using its 32 nm CMOS technology, fabricated the KiloCore chip and could make a production run if required.

Because each processor is independently clocked, it can shut itself down to further save energy when not needed, said graduate student Brent Bohnenstiehl, who developed the principal architecture. Cores operate at an average maximum clock frequency of 1.78 GHz, and they transfer data directly to each other rather than using a pooled memory area that can become a bottleneck for data.

The 1,000 processors can execute 115 billion instructions per second while dissipating only 0.7 watts, which means the chip can be powered by a single AA battery. The KiloCore chip executes instructions more than 100 times more efficiently than a modern laptop processor.
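A quick back-of-envelope check puts the quoted figures in perspective. The instruction rate and power draw come from the article; the AA battery capacity (~2000 mAh at 1.5 V) is a typical value assumed here for illustration:

```python
# Sanity-check the KiloCore efficiency and battery-life claims above.
KILOCORE_IPS = 115e9     # instructions per second at the quoted operating point
KILOCORE_WATTS = 0.7     # power dissipation at that operating point

gips_per_watt = KILOCORE_IPS / KILOCORE_WATTS / 1e9
print(f"Efficiency: ~{gips_per_watt:.0f} billion instructions per joule")

# A typical AA battery: ~2000 mAh at 1.5 V ~= 3 Wh ~= 10,800 J (assumed value).
battery_joules = 2.0 * 1.5 * 3600
hours = battery_joules / KILOCORE_WATTS / 3600
print(f"Runtime on one AA battery: ~{hours:.1f} hours")
```

At 0.7 W, a single AA cell would indeed keep the chip running for a few hours, consistent with the article's claim.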

The processor is already adapted for wireless coding/decoding, video processing, encryption, and other applications involving large amounts of parallel data, such as scientific computing and datacentre work.

Courtesy-Fud

 

AMD Finally Confirms Polaris Specs

June 21, 2016 by Michael  
Filed under Computing

In official slides that have leaked, AMD has confirmed most of the specifications for both the Polaris 10 and the Polaris 11 GPUs, which will power the upcoming Radeon RX 480, RX 470 and RX 460 graphics cards.

According to the slides published by Computerbase.de, both GPUs are based on AMD’s 4th-generation Graphics Core Next (GCN 4.0) architecture, offer a 2.8x performance-per-watt improvement compared to the previous generation, have 4K encode and decode capabilities, and bring DisplayPort 1.3/1.4 and HDR support.

Powering three different graphics cards, these two GPUs will cover different market segments. The Polaris 10, codenamed Ellesmere, will power both the Radeon RX 480, meant for affordable VR and 1440p gaming, and the recently unveiled RX 470, meant to cover the 1080p gaming segment. The Polaris 10 packs 36 Compute Units (CUs), so it should end up with 2,304 Stream Processors. Both the RX 480 and RX 470 should come with 4GB or 8GB of GDDR5 memory paired with a 256-bit memory interface. The Ellesmere GPU offers over 5 TFLOPs of compute performance and should peak at 150W.
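The CU and TFLOPs figures above hang together. GCN groups 64 Stream Processors per Compute Unit, and each SP performs two floating-point operations per clock (a fused multiply-add), so the quoted numbers imply a boost clock a little above 1 GHz, a plausible sketch rather than a confirmed spec:

```python
# Check the Polaris 10 Stream Processor count and the clock implied
# by the quoted 5 TFLOPs figure.
CUS = 36
SP_PER_CU = 64               # standard for GCN architectures
FLOPS_PER_SP_PER_CLOCK = 2   # one FMA = 2 FLOPs

stream_processors = CUS * SP_PER_CU
print(stream_processors)     # 2304, matching the article

FLOPS_TARGET = 5.0e12        # "over 5 TFLOPs"
implied_clock_ghz = FLOPS_TARGET / (stream_processors * FLOPS_PER_SP_PER_CLOCK) / 1e9
print(f"Implied clock for 5 TFLOPs: ~{implied_clock_ghz:.2f} GHz")
```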

The Radeon RX 470 should be based on the Ellesmere Pro GPU and will probably end up with both lower clocks and fewer Stream Processors. According to our sources close to the company, it should launch with a US $179 price tag, while the RX 480 should launch on 29th of June at US $199 for the reference 4GB version. Most AIB partners will come up with custom 8GB graphics cards, which should launch at US $279 and up.

The Polaris 11 GPU, codename Baffin, will have 16 CUs and should end up with 1024 Stream Processors. The recently unveiled Radeon RX 460 based on this GPU should come with 4GB of GDDR5 memory paired up with a 128-bit memory interface. The Radeon RX 460 targets casual and MOBA gamers and should provide decent competition to the Geforce GTX 950 as both have a TDP of below 75W and do not need additional PCIe power connectors.

According to earlier leaked benchmarks, AMD’s Polaris architecture packs quite a punch considering both its price and TDP, so AMD just might have a chance at a much-needed rebound in market share.

Courtesy-Fud

 

Changes Finally Coming To Cable’s Required Set-top Boxes?

June 20, 2016 by mphillips  
Filed under Consumer Electronics

The U.S. pay-TV industry has put forth a plan that would allow more than 50 million subscribers to get rid of the costly set-top boxes required to view television and video programming, in a bid to convince federal regulators to abandon more far-reaching reforms.

Tom Wheeler, chairman of the Federal Communications Commission, proposed in January opening the $20 billion cable and satellite TV set-top box market to new competitors and allowing consumers to access multiple content providers from a single app or device.

Under the industry proposal unveiled in meetings with the FCC this week, the pay-TV industry would commit to creating apps to allow consumers to watch programs without needing to lease a box and the FCC could implement regulations enforcing the commitment.

Kim Hart, a spokeswoman for Wheeler, said on Friday that he was pleased the “industry has adopted the primary goal of our proposal, to promote greater competition and choice for consumers, and agree it is achievable.”

Wheeler wants to see additional details to “determine whether their proposal fully meets all of the goals of our proceeding,” Hart said.

Wheeler’s proposal has faced criticism from companies like AT&T Inc, Comcast Corp, Twenty-First Century Fox Inc, CBS Corp, Walt Disney Co, Viacom and others, along with more than 150 members of Congress. They have raised copyright, content licensing and other issues.

Opponents fear rivals like Alphabet Inc or Apple Inc could create devices or apps and insert their own content or advertising in cable content.

Wheeler’s proposal would create a framework for device manufacturers and software developers to allow consumers to access content from providers such as Netflix, Amazon.com, Hulu, YouTube and a pay-TV company on a single device or app.

FCC Commissioner Jessica Rosenworcel, a Democrat, praised Wheeler for proposing reforms, but told Reuters “it has become clear the original proposal has real flaws. … We need to find another way forward. So I’m glad that efforts are underway to hash out alternatives.”

The FCC voted 3-2 along party lines in February to advance its plan. A final vote could come as early as August.

 

 

AMD Touts Zen

June 17, 2016 by Michael  
Filed under Computing

AMD has released a short video in which its lead systems engineer, Louis Castro, runs Doom on its Summit Ridge, Zen-based processor.

This means that the silicon is in good shape and the processor was probably taped out late last year with no major issues. AMD CEO Lisa Su has already said that the desktop version will arrive first, and this was the CPU demonstrated in the video.

Summit Ridge is not an APU and doesn’t have a GPU core. AMD engineers were using a discrete GPU, probably one they found out the back.

Summit Ridge is an AM4-socket processor, and half a dozen of them are shown in the video.

 

Courtesy-Fud

 

Will EA Screw-Up The Star Wars Game Franchise?

June 17, 2016 by Michael  
Filed under Gaming

EA has been telling the world how it is going to use the rights it has on Star Wars games – it is going to make a lot of them.

At its E3 2016 press conference today, EA said that DICE and Motive are working on a new version of Star Wars: Battlefront for release in 2017, while Visceral Games is creating an action-adventure game with an “original narrative set in the Star Wars universe with all-new characters.”

Respawn Entertainment is developing “a different style of gameplay” set in “a timeline we have yet to explore with our EA Star Wars titles.” In other words, almost every EA studio is flat out making something Star Warish.

And while the company didn’t make any mention of it at the news conference, the preview video it showed fans offered a very brief glimpse of a player wearing a PlayStation VR headset, while an X-Wing’s cockpit was shown on screen. That’s likely to stoke anticipation about a reboot of the classic 1997 title “X-Wing vs. TIE Fighter.”

EA and Lucasfilm signed a multiyear licensing deal in 2013. Due in large part to the strength of “Star Wars Battlefront,” EA handily beat its earnings estimate in its most recent quarter. Star Trek: Bridge Crew, a VR simulation of the bridge of an Enterprise and a big VR commitment, looks like a fun game too.

Courtesy-Fud

 

Will AMD’s Naples Processor Have 32 Cores?

June 16, 2016 by Michael  
Filed under Computing

AMD’s Zen chip will have as many as 32 cores, 64 threads and more L3 cache than you can poke a stick at.

Codenamed Naples, the chip uses the Zen architecture. Each Zen core has its own dedicated 512KB cache. A cluster [shurely that should be cloister? - Ed.] of Zen cores shares an 8MB L3 cache, which brings the total shared L3 cache to 64MB. This is a big chip, and of course there will be a 16-core variant.
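The cache figures add up neatly. Dividing the 64MB total L3 by the 8MB shared per cluster gives eight clusters of four cores each (the four-core grouping matches what AMD would later call a CCX, an inference here rather than something the article states):

```python
# Add up the cache figures quoted above for the 32-core Naples part.
CORES = 32
L2_PER_CORE_KB = 512
L3_PER_CLUSTER_MB = 8
TOTAL_L3_MB = 64

clusters = TOTAL_L3_MB // L3_PER_CLUSTER_MB        # 8 clusters on the die
cores_per_cluster = CORES // clusters              # 4 cores sharing each 8MB L3
total_l2_mb = CORES * L2_PER_CORE_KB // 1024       # 16MB of dedicated per-core cache
print(clusters, cores_per_cluster, total_l2_mb)
```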

This will be a 14nm FinFET product manufactured at GlobalFoundries and supporting the x86 instruction set. Naples has eight independent memory channels and up to 128 lanes of gen 3 PCIe, which makes it suitable for fast NVMe memory controllers and drives. Naples also supports up to 32 SATA or NVMe drives.

If you like fast network interfaces, Naples supports 16x 10GbE, with the controller integrated, probably in the chipset. Naples uses the SP3 LGA server socket.

The first Zen based server / enterprise products will range between a modest 35W TDP to a maximum of 180W TDP for the fastest ones.

There will be dual-, quad-, sixteen- and thirty-two-core server versions of Zen, arriving at different times. Most of them will launch in 2017, with a possibility of a very late 2016 introduction.

It is another one of those Fudzilla-told-you-so moments. We revealed a few Zen-based products last year. The Zen chip with a Greenland/Vega HBM2-powered GPU with HSA support will come too, but much later.

Lisa Su, AMD’s CEO, told Fudzilla that the desktop version will come first, followed by server, notebook and finally embedded parts. If that 40 percent IPC gain holds across more than just a single task, AMD has a chance of giving Intel a run for its money.

 

Courtesy-Fud

 

Does Samsung Own The OLED Market?

June 16, 2016 by Michael  
Filed under Around The Net

While the rest of the world is talking about OLED displays, it seems that 95 per cent of those displays are made by Samsung.

Samsung appears to have cornered the market, making more than 95 per cent of total shipments in the first quarter (Q1) of 2016.

UBI Research said that total global OLED shipments surged to 91.3 million units, with Samsung making up a whopping 95 per cent of this number.

OLED panels have become a preferred choice among smartphone manufacturers because the light-emitting layers of an OLED are lighter, easier to produce, can be made in larger sizes and do not require backlighting. OLED can also be flexible instead of rigid, which makes it ideal for curved and even bendable displays.

Taking advantage of the demand, Samsung, which already leads the market, is looking to ramp up OLED production at its A3 plant from 15,000 units per month to 105,000 units per month by the end of the year.

Its closest rival, LG, is ramping up its OLED smartphone panel production and may have scored orders from Xiaomi. But Samsung has Huawei and Lenovo under its belt and is reported to be supplying close to 100 million panels to Apple for the iPhone 7 and iPhone 8 next year.

 

Courtesy-Fud

 

Is Something Bigger On The Horizon After Virtual Reality?

June 14, 2016 by Michael  
Filed under Gaming

This week’s E3 won’t be entirely dominated by VR, as some events over the past year have been; there’s too much interest in the prospect of new console hardware from all the major players and in the AAA line-up as this generation hits its stride for VR to grab all the headlines. Nonetheless, with both Rift and Vive on the market and PSVR building up to an autumn launch, VR is still likely to be the focus of a huge amount of attention and excitement at and around E3.

Part of that is because everyone is still waiting to see exactly what VR is going to be. We know the broad parameters of what the hardware is and what it can do – the earliest of early adopters even have their hands on it already – but the kind of experiences it will enable, the audiences it will reach and the way it will change the market are still totally unknown. The heightened interest in VR isn’t just because it’s exciting in its own right; it’s because it’s unknown, and because we all want to see the flashes of inspiration that will come to define the space.

One undercurrent to look out for at E3 is one that the most devoted fans of VR will be deeply unhappy with, but one which has been growing in strength and confidence in recent months. There’s a strong view among quite a few people in the industry (both in games and in the broader tech sector) that VR isn’t going to be an important sector in its own right. Rather, its importance will be as a stepping stone to the real holy grail – Augmented or Mixed Reality (AR / MR), a technology that’s a couple of years further down the line but which will, in this vision of the future, finally reach the mainstream consumer audience that VR will never attain.

The two technologies are related but, in practical usage, very different. VR removes the user from the physical world and immerses them entirely in a virtual world, taking over their visual senses entirely with closed, opaque goggles. AR, on the other hand, projects additional visual information onto transparent goggles or glasses; the user still sees the real world around them, but an AR headset adds an extra, virtual layer, ranging from something as simple as a heads-up display (Google’s ill-fated Glass was a somewhat clunky attempt at this) to something as complex as 3D objects that fit seamlessly into your reality, interacting realistically with the real objects in your field of vision. Secretive AR headset firm Magic Leap, which has raised $1.4 billion in funding but remains tight-lipped about its plans, prefers to divide the AR space into Augmented Reality (adding informational labels or heads-up display information to your vision) and Mixed Reality (which adds 3D objects that sit seamlessly alongside real objects in your environment).

The argument I’m hearing increasingly often is that while VR is exciting and interesting, it’s much too limited to ever be a mainstream consumer product – but the technology it has enabled and advanced is going to feed into the much bigger and more important AR revolution, which will change how we all interact with the world. It’s not what those who have committed huge resources to VR necessarily want to hear, but it’s a compelling argument, and one that’s worthy of consideration as we approach another week of VR hype.

The reasoning has two bases. The first is that VR isn’t going to become a mainstream consumer product any time soon, a conclusion drawn from a number of well-worn arguments that will be familiar to anyone who’s followed the VR resurgence and which have yet to receive a convincing rebuttal, other than an optimistic “wait and see”. The first of these arguments is that VR simply doesn’t work well enough for a large enough proportion of the population to become a mainstream technology. Even with great frame-rates and lag-free movement tracking, some aspects of VR simply induce nausea and dizziness in a decent proportion of people. One theory is that it’s down to the fact that VR only emulates stereoscopic depth perception, i.e. the difference in the image perceived by each eye, and can’t emulate focal depth perception, i.e. the physical focusing of your eye on objects at different distances from you; for some people the disparity between those two focusing mechanisms isn’t a problem, while for others, it makes them feel extremely sick.

Another theory is that it’s down to a proportion of the population getting nauseous from physical acceleration and movement not matching up with visual input, rather like getting motion sick in a car or bus. In fact, both of those things probably play a role; either way, the result is that a sizeable minority of people feel ill almost instantly when using VR headsets, and a rather more sizeable number feel dizzy and unwell after playing for extended periods of time. We won’t know just how sizeable the latter minority is until more people actually get a chance to play VR for extended periods; it’s worth bearing in mind once again that the actual VR experiences most people have had to date have been extremely short demos, on the order of 3 to 5 minutes long.

The second issue is simply a social one. VR is intrinsically designed around blocking out the world around you, and that limits the contexts in which it can be used. Being absorbed in a videogame while still aware of the world and the people around you is one thing; actually blocking out that world and those people is a fairly big step. In some contexts it simply won’t work at all; for others, we’re just going to have to wait and see how many consumers are actually willing to take that step on a regular basis, and your take on whether it’ll become a widespread, mainstream behaviour or not really is down to your optimism about the technology.

With AR, though, both of these problems are solved to some extent. You’re still viewing the real world, just with extra information in it, which ought to make the system far more usable even for those who experience motion sickness or nausea from VR (though I do wonder what happens regarding focal distance when some objects appear to be at a certain position in your visual field, yet exist at an entirely different focal distance from your eyes; perhaps that’s part of what Magic Leap’s secretive technology solves). Moreover, you’re not removed from the world any more than you would be when using a smartphone – you can still see and interact with the people and objects around you, while also interacting with virtual information. It may look a little bit odd in some situations, since you’ll be interacting with and looking at objects that don’t exist for other people, but that’s a far easier awkwardness to overcome than actually blocking off the entire physical world.

What’s perhaps more important than this, though, is what AR enables. VR lets us move into virtual worlds, sure; but AR will allow us to overlay vast amounts of data and virtual objects onto the real world, the world that actually matters and in which we actually live. One can think of AR as finally allowing the huge amounts of data we work with each day to break free of the confines of the screens in which they are presently trapped; both adding virtual objects to our environments, and tagging physical objects with virtual data, is a logical and perhaps inevitable evolution of the way we now work with data and communications.

While the first AR headsets will undoubtedly be a bit clunky (the narrow field of view of Microsoft’s Hololens effort being a rather off-putting example), the evolutionary path towards smaller, sleeker and more functional headsets is clear – and once they pass a tipping point of functionality, the question of “VR or AR” will be moot. VR is, at best, a technology that you dip into for entertainment for an hour here and there; AR, at its full potential, is something as transformative as PCs or smartphones, fundamentally changing how pretty much everyone interacts with technology and information on a constant, hourly, daily basis.

Of course, it’s not a zero-sum game – far from it. The success of AR will probably be very good for VR in the long term; but if we see VR now as a stepping stone to the greater goal of AR, then we can imagine a future for VR itself only as a niche within AR. AR stands to replace and reimagine much of the technology we use today; VR will be one thing that AR hardware is capable of, perhaps, but one that appeals only to a select audience within the broad, almost universal adoption of AR-like technologies.

This is the vision of the future that’s being articulated more and more often by those who work most closely with these technologies – and while it won’t (and shouldn’t) dampen enthusiasm for VR in the short term, it’s worth bearing in mind that VR isn’t the end-point of technological evolution. It may, in fact, just be the starting point for something much bigger and more revolutionary – something that will impact the games and tech industries in a way even more profound than the introduction of smartphones.

Courtesy-GI.biz

 

Is AMD Outpacing nVidia In The Gaming Space?

June 14, 2016 by Michael  
Filed under Gaming

MKM analyst Ian Ing claims that AMD’s recent gaming refresh was better executed than Nvidia’s.

Writing in a research report, Ing said that both GPU suppliers continue to benefit from strong core gaming plus emerging applications for new GPU processing.

However, AMD’s transition to the RX series from the R9 this month is proving smoother than Nvidia’s switch to Pascal architecture from Maxwell.

Nvidia is doing well from new GPU applications such as virtual reality and autonomous driving.

He said that pricing was holding despite a steady availability of SKUs from board manufacturers. Ing wrote that he expected a steeper ramp of RX availability compared to last year’s R9 launch, as the new architecture is lower-risk, given that HBM memory was implemented last year.

Ing upped his price target on Advanced Micro Devices stock to 5 from 4, and on Nvidia stock to 52 from 43. On the stock market today, AMD stock rose 0.9 per cent to 4.51. Nvidia climbed 0.2 per cent to 46.33.

Nvidia unveiled its new GeForce GTX 1080, using the Pascal architecture, on 27 May and while Maxwell inventory was running out, Nvidia customers were experiencing Pascal shortages.

“We would grow concerned if the present availability pattern persists in the coming weeks, which would imply supply issues/shortages,” Ing said.

Courtesy-Fud

 

Amazon To Join The Already Crowded Music Streaming Business

June 13, 2016 by mphillips  
Filed under Consumer Electronics

Amazon.com Inc is gearing up to launch a standalone music streaming subscription service, placing it squarely in competition with rival offerings from Apple Inc and Spotify, according to sources familiar with the matter.

The service will be offered at $9.99 per month, in line with major rivals, and it will offer a competitive catalog of songs, the sources said. Amazon is finalizing licenses with labels for the service, which likely will be launched in late summer or early fall, the sources said.

Amazon, which offers a free streaming music service with a limited catalog to subscribers of its Prime shipping and video service, did not respond to a request for comment about the new, full-fledged music plan.

Although it will be a late entrant to the crowded streaming space, Amazon believes a comprehensive music service is important to its bid to be a one-stop shop for content and goods, the sources said.

The new music offering also is intended to increase the appeal of the Amazon Echo, its home speaker, which searches the Internet and orders products from the retailer with voice commands.

“A music service will further increase the daily interactions between Amazon and its customer base,” said former music executive Jay Samit when told about the company’s plan.

The new Amazon effort will compete directly with Apple Music and Spotify, which boast more than 30 million songs. Apple launched its service last year in one of the highest profile signs that listeners wanted subscription services, rather than paying for individual songs or albums.

The service also will diversify Amazon’s subscription offerings and be another step away from a single, annual subscription. Amazon recently began allowing subscribers to Prime to pay monthly, for instance.

Silicon Valley titans such as Apple and Alphabet Inc’s Google have muscled into music streaming in recent years, aiming to weave themselves more tightly into their customers’ daily routines and drive device sales.

Amazon similarly hopes its new service’s tight integration with the Echo will help it stand out and reinforce the speaker’s appeal, the sources said.

Released broadly last year, the Echo has become a surprise hit that rival Google is now seeking to emulate with a speaker of its own.

The move suggests that Amazon will increasingly offer basic media options through Prime while selling additional subscriptions for consumers who want to go deeper. The company recently launched a standalone video service.

The new music service is unlikely to steal many customers from Spotify, but it could pose a threat to other players, said David Pakman, a partner at Venrock who headed early Apple music efforts, when informed of the move.

 

 

Samsung SDI Still Pursuing Tesla To Provide Electric Car Batteries

June 8, 2016 by mphillips  
Filed under Around The Net

Samsung SDI is making progress in its discussions with Tesla Motors to provide batteries for the U.S. automaker’s Model 3 electric car as well as its energy storage products, a source with direct knowledge of the matter told Reuters.

Shares in Samsung SDI surged to trade 6 percent higher in early afternoon trade, beating the wider market’s 1.1 percent gain.

Tesla, which currently procures its batteries from Japan’s Panasonic Corp, is likely to add Samsung SDI as a supplier should sales exceed expectations, the source said, although he declined to specify what level of sales would clinch a deal for the South Korean company.

Citing “tremendous demand,” Tesla Chief Executive Elon Musk said in April that the automaker planned to boost total vehicle production to 500,000 in 2018 – two years earlier than its original target. Suppliers have said the goal will be difficult to achieve.

Tesla has taken 373,000 orders for its Model 3 – which has a starting price of $35,000, about half its Model S – and has said it would begin customer deliveries in late 2017.

“It remains to be seen whether the orders will translate into actual sales,” the source said. The source declined to be identified as the discussions were confidential.

A Samsung SDI spokesman declined to comment.

 

The IoT Move Appears To Be Short On Security

June 8, 2016 by Michael  
Filed under Computing

The IOActive IoT Security Survey has shone a light on the shoddy side of connected devices and warned that all those things you’ve welcomed into your home will let you down at some point.

They are vulnerable because they connect to things, and anything that can be connected can also be interrupted and interfered with.

The one in 10 number comes from a panel of senior security professionals interviewed by IOActive about the rise of the IoT. These people are concerned that security is lacking in everything from wearables to household appliances.

Half of respondents believe that under 10 per cent of IoT products offer adequate ass coverage, while a staggering 85 per cent believe that less than half of products are secure.

Around two thirds felt that the security was probably better than you get on other products, but we don’t care about them right now.

“Consensus is that more needs to be done to improve the security of all products, but the exponential rate at which IoT products are coming to market, compounded by the expansive risk network created by their often open connectivity, makes IoT security a particular concern and priority,” said Jennifer Steffens, chief executive of IOActive.

“According to Gartner, 21 billion connected things will be in use by 2020. It’s important for the companies that develop these products to ensure security is built in. Otherwise hackers are provided with opportunities to break into not only the products, but potentially other systems and devices they’re connected to.”

The problem is that security is not considered early enough in the design process so it has to be dealt with later, or presumably not at all. Steffens explained that a security stitch in time saves nine.

“Companies often rush development to get products to market in order to gain competitive edge, and then try to engineer security in after the fact,” she said.

“This ultimately drives up costs and creates more risk than including security at the start of the development lifecycle.”

 

 

Courtesy-TheInq

 

Can MediaTek Win In The Car Space?

June 7, 2016 by Michael  
Filed under Computing

MediaTek’s R&D teams are working with European-based car vendors to develop the company’s automotive electronics and virtual reality (VR) offerings.

Digitimes claims that, having developed SoCs for smartphones, mobile devices and connected home appliances, MediaTek is stepping up development of chip solutions for auto electronics and VR applications.

MediaTek is focused on in-car entertainment systems, and will be using its partnership with China-based NavInfo, a digital mapping service provider to help out.

NavInfo will sell subsidiary AutoChips (Hefei) and will also form a strategic alliance in which MediaTek will make an investment of US$100 million.

MediaTek will be developing VR for handsets and will support Google’s Daydream VR platform.

Meanwhile, the team is flat out improving its IC solutions for Internet of Things (IoT) and wearable device applications. It is pretty sure that this will become its third-largest segment after mobile devices and connected home appliances such as digital TVs. In fact, the only two areas that MediaTek does not appear interested in are server and augmented reality (AR) applications.

Courtesy-Fud

 

Micron Announces 3D NAND Based SSDs

June 7, 2016 by Michael  
Filed under Computing

Micron has announced its first client- and OEM-oriented solid-state drives based on 3D NAND, the Micron 1100 and Micron 2100 series.

The Micron 1100 SSD is a mainstream-oriented drive that will be based on Marvell’s 88SS1074 controller and Micron’s 384Gb 32-layer TLC NAND. Using a SATA 6Gbps interface and available in M.2 and 2.5-inch form factors, the Micron 1100 should replace Micron’s mainstream M600 series, which is based on 16nm MLC NAND.

The Micron 1100 SSD will be available in 256GB, 512GB, 1TB and 2TB capacities. It will offer sequential performance of up to 530MB/s for reads and up to 500MB/s for writes, with random 4K performance of up to 92K IOPS for reads and up to 83K IOPS for writes. With such performance, it is obvious that the Micron 1100 series will target the mainstream market as a budget SSD.
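Those sequential figures sit close to what the SATA interface itself can deliver. SATA III uses 8b/10b line encoding, so only 80 per cent of the raw 6 Gbps carries data, a rough calculation under that standard assumption:

```python
# Estimate the usable bandwidth of a SATA 6Gbps link and compare it
# with the drive's quoted 530MB/s sequential read speed.
RAW_GBPS = 6.0
ENCODING_EFFICIENCY = 0.8   # 8b/10b encoding: 8 data bits per 10 line bits

usable_mb_s = RAW_GBPS * 1e9 * ENCODING_EFFICIENCY / 8 / 1e6
print(f"Usable SATA III bandwidth: ~{usable_mb_s:.0f} MB/s")

quoted_read = 530
print(f"Quoted read speed uses ~{quoted_read / usable_mb_s:.0%} of the link")
```

In other words, faster client drives need the PCIe NVMe interface the Micron 2100 moves to, since SATA itself is the bottleneck here.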

The Micron 2100 is an M.2 PCIe NVMe SSD that is actually Micron’s first client-oriented PCIe SSD and also its first PCIe SSD based on 3D NAND. Unfortunately, Micron has not finalized the precise specifications, so we still do not have performance numbers, but it will be available in capacities reaching 1TB.

The Micron 1100 is expected to hit mass production in July, so we should see some of the first drives by the end of next month. The Micron 2100 will be coming by the end of summer.

Courtesy-Fud

 

Slate Tablet Market Continues Downward Spiral

June 6, 2016 by mphillips  
Filed under Consumer Electronics

Demand for slate-shaped tablets is losing steam even faster than expected.

For all of 2016, global tablet shipments will drop by 9.6% over 2015, market research firm IDC forecast this week, marking the second straight year of decline. In March, IDC had forecast a decline of 6% for this year.

The decline will occur even when newer detachable tablets, often called 2-in-1s, are included with slate tablets, IDC said.

“The impact of the decline of slates is having a bigger impact, faster than we thought. They are not coming back,” said IDC analyst Jean Phillippe Bouchard in an interview.

But Bouchard was quick to add that slates are not disappearing entirely. There will continue to be a robust market for small slate tablets, under 8 inches, that are sold for less than $125 by Amazon and others, primarily for use by children.

“There will also continue to be a slate market for commercial uses in healthcare, education and hospitality, so there are a lot of use cases for slates saying that slates are not going away,” he said. “There will still be a need for slates but not as great as in 2010.” IDC said well over 100 million slate tablets will ship annually through 2020.

As IDC and others have said in the past, slate tablets have saturated the market. “Everyone wanting a slate has one, and there’s very little reason to replace it or upgrade it,” Bouchard added.

IDC pegged the total tablet market of both slates and detachables at 207 million units shipped in 2015, but that figure will decline to about 187 million in 2016. IDC didn’t release its forecast for years beyond 2016, but said the market will continue to decline in 2017 before having a “slight rebound in 2018 and beyond, driven by detachable tablet growth.”