It's becoming more obvious lately that Intel and Microsoft are no longer joined at the hip. Intel is trying desperately to make a dent in the tablet market, and with Windows struggling on those devices, Android is where it's at.
Intel hopes to see its processors used in 40 million tablets this year, and 80% to 90% of those will be running Google’s Android OS, CEO Brian Krzanich said on Tuesday.
“Our mix of OSes reflects pretty much what you see in the marketplace,” Krzanich said during Intel’s quarterly earnings call.
Most Intel-powered tablets running Android today use the older Medfield and Clover Trail+ chips. More Android tablets running the latest Atom processor, called Bay Trail, will ship later this quarter.
That’s not to say Intel is abandoning Windows — far from it. It’s just going where the market is today. Krzanich said he expects Windows to “grow and gain traction,” and more Intel-based tablets running both Android and Windows will be shown in June at the massive Computex trade show in Taipei.
The first Android-based Bay Trail tablet, the DreamTab, was announced in January, but it hasn’t shipped yet.
Intel is chasing ARM, the U.K. company whose processor designs are used in most tablets today, including those running both Android and Apple’s iOS.
The 40 million Intel tablets that will ship this year will give the company 15% to 20% of the tablet market, Intel CFO Stacy Smith said on the earnings call.
Intel is providing discounts and development funds to tablet makers to reduce the cost of using its chips. It’s looking for growth with the white-box Chinese tablet makers, which are expected to ship up to 130 million tablets this year.
Intel chips are available in some tablets now priced under $99, but most will be priced between $125 and $250, Krzanich said.
Microsoft hasn’t made much of a dent yet in Google’s and Apple’s share of the market, but IDC estimated last month that Windows would have 10.2% of the tablet market by 2017. Dell, Toshiba, Lenovo and Hewlett-Packard have launched Windows 8 tablets with Bay Trail, and Microsoft’s own Surface Pro 2 uses an Intel Core processor, but the tablets haven’t sold well.
“We know you want features that allow you to move as seamlessly as possible between Office Online and the desktop,” wrote Kaberi Chowdhury, an Office Online technical product manager, in a blog post Monday.
Improvements to Excel Online include the ability to insert new comments, edit and delete existing comments, and properly open and edit spreadsheets that contain Visual Basic for Applications (VBA) code.
Meanwhile, Word Online has a new "pane" where users can see all comments in a document, and reply to them or mark them as completed. It also has a refined lists feature that is better able to recognize whether users are continuing a list or starting a new one. In addition, footnotes and endnotes can now be added more conveniently inline.
PowerPoint Online has a revamped text editor that offers a layout view that more closely resembles the look of finished slides, according to Microsoft. It also has improved performance and video functionality, including the ability to play back embedded YouTube videos.
For users of OneNote Online, Microsoft is now adding the ability to print out the notes they’ve created with the application.
Microsoft is also making Word Online, PowerPoint Online and OneNote Online available via Google’s Chrome Web Store so that Chrome browser users can add them to their Chrome App launcher. Excel Online will be added later.
The improvements in Office Online will be rolled out to users this week, starting Monday.
Office Online, which used to be called Office Web Apps, competes directly against Google Docs and other browser-based office productivity suites. It’s meant to offer users a free, lightweight, Web-based version of these four applications if they don’t have the desktop editions on the device they’re using at that moment.
Double Fine has warned indies of the dangers of devaluing their products, citing its new publishing initiative as a way of protecting against that outcome.
In an interview with USgamer, COO Justin Bailey expressed concern over the harmful side effects of low price points and deep discounting for indie games. By giving away too much for too little, he warned, indie developers could end up in a situation similar to the one the casual market now faces.
“I think what indies really need to watch out for is not becoming the new casual games,” he said. “I don’t think that’s a problem from the development side. Indies are approaching it as an artform and they’re trying to be innovative, but what’s happening in the marketplace is indies are being pushed more and more to have a lower price or have a bunch of games bundled together.”
Double Fine is publishing MagicalTimeBean’s Escape Goat 2, the first occasion it has assisted another developer in that way, and it won’t be the last. According to Bailey, what seems to be a purely business decision on the surface has a strong altruistic undercurrent.
“Double Fine wants to keep indies premium. You see that in our own games and how we’re positioning them. We fight the urge to just completely drop the price. That’s one of the things we want to encourage in this program. Getting people to stick to a premium price point and to the platforms that allow you to do that.”
“We’re not looking to replace… we’re trying to augment the system,” he replies. “We’re making small strides right now. Costume Quest 2 is a high-budget game. It’s one that I thought it was best to have a publishing partner who can also spend some marketing funds around it.”
Double Fine is not the first developer to express concern over the tendency among indies to drastically lower prices.
In January, Jason Rohrer published an article imploring developers to consider the loyal fans who buy their games full-price only to see them on sale at a huge discount just a few weeks or months later. Last month, Positech Games’ Cliff Harris went further, suggesting that low price-points actually change the way players see and interact with the games they purchase.
The Intel Education 2-in-1 hybrid has a 10.1-inch screen that can detach from a keyboard base to turn into a tablet. Intel makes reference designs, which are then replicated by device makers and sold to educational institutions.
The 2-in-1 has a quad-core Intel Atom processor Z3740D, which is based on the Bay Trail architecture. The battery lasts about eight hours in tablet mode, and three more hours when docked with the keyboard base, which has a second battery.
Intel did not immediately return requests for comment on the estimated price for the hybrid or when it would become available.
Education is a hotly contested market among computer makers, as Apple pushes its iPads and MacBooks while PC makers like Dell, Hewlett-Packard and Lenovo hawk their Chromebooks.
Some features in the Intel 2-in-1 are drawn from the company’s Education tablets, which also run on Atom processors, but have the Android OS.
The 2-in-1 hybrid has front-facing and rear-facing cameras, and a snap-on magnification lens that allows students to examine items at a microscopic level.
The computer can withstand a drop of 70 centimeters, a feature added as protection for instances in which children mishandle laptops and let them fall. The keyboard base also has a handle.
The screen can be swiveled and placed on the keyboard, giving it the capability of a classic convertible laptop. This feature has been drawn from Intel’s Classmate series of education laptops.
The 2-in-1 has software intended to make learning easier, including tools for the arts and science. Intel’s Kno app provides access to 225,000 books. Typically, some of the books available via Kno are free, while others are fee-based.
With Amazon's Fire TV device the first out the door, the second wave of microconsoles has just kicked off. Amazon's device will be joined in reasonably short order by one from Google, with an app-capable update of the Apple TV device also likely in the works. Who else will join the party is unclear; Sony's Vita TV, quietly soft-launched in Japan last year, remains a potentially fascinating contender given the right messaging and services behind it, but for now it's out of the race. One thing seems certain, though; at least this time we're actually going to have a party.
“Second wave”, you see, rather implies the existence of a first wave of microconsoles, but last time out the party was disappointing, to say the least. In fact, if you missed the first wave, don’t feel too bad; you’re in good company. Despite enthusiasm, Kickstarter dollars and lofty predictions, the first wave of microconsole devices tanked. Ouya, Gamestick and their ilk just turned out to be something few people actually wanted or needed. Somewhat dodgy controllers and weak selections of a sub-set of Android’s game library merely compounded the basic problem – they weren’t sufficiently cheap or appealing compared to the consoles reaching their end-of-life and armed with a vast back catalogue of excellent, cheap AAA software.
“The second wave microconsoles will enjoy all the advantages their predecessors did not. They’ll be backed by significant money, marketing and development effort, and will have a major presence at retail”
That was always the reality which deflated the most puffed-up “microconsoles will kill consoles” argument; the last wave of microconsoles sucked compared to consoles, not just for the core AAA gamer but for just about everyone else as well. Their hardware was poor, their controllers uncomfortable, their software libraries anaemic and their much-vaunted cost savings resulting from mobile game pricing rather than console game pricing tended to ignore the actual behaviour of non-core console gamers – who rarely buy day-one software and as a result get remarkably good value for money from their console gaming experiences. Comparing mobile game pricing or F2P models to $60 console games is a pretty dishonest exercise if you know perfectly well that most of the consumers you’re targeting wouldn’t dream of spending $60 on a console game, and never have to.
Why is the second wave of microconsoles going to be different? Three words: Amazon, Google, Apple. Perhaps Sony; perhaps even Samsung or Microsoft, if the wind blows in the right direction for those firms (a Samsung microconsole, sold separately and also bundled into the firm's TVs, as Sony will probably do with Vita TV in future Bravia televisions, would make particular sense). Every major player in the tech industry has a keen interest in controlling the channel through which media is consumed in the living room. Just as Sony and Microsoft originally entered the games business with a "trojan horse" strategy for controlling living rooms, Amazon and Google now recognise games as being a useful way to pursue the same objective. Thus, unlike the plucky but poorly conceived efforts of the small companies who launched the first wave of microconsoles, the second wave is backed by the most powerful tech giants in the world, whose titanic struggle with each other for control of the means of media distribution means their devices will have enormous backing.
To that end, Amazon has created its own game studios, focusing their efforts on the elusive mid-range between casual mobile games and core console games. Other microconsole vendors may take a different approach, creating schemes to appeal to third-party developers rather than building in-house studios (Apple, at least, is almost guaranteed to go down this path; Google could yet surprise us by pursuing in-house development for key exclusive titles). Either way, the investment in software will come. The second wave of microconsoles will not be “boxes that let you play phone games on your TV”; at least not entirely. Rather, they will enjoy dedicated software support from companies who understand that a hit exclusive game would be a powerful way to drive installed base and usage.
Moreover, this wave of microconsoles will enjoy significant retail support. Fire TV’s edge is obvious; Amazon is the world’s largest and most successful online retailer, and it will give Fire TV prime billing on its various sites. The power of being promoted strongly by Amazon is not to be underestimated. Kindle Fire devices may still be eclipsed by the astonishing strength of the iPad in the tablet market, but they’re effectively the only non-iPad devices in the running, in sales terms, largely because Amazon has thrown its weight as a retailer behind them. Apple, meanwhile, is no laggard at retail, operating a network of the world’s most profitable stores to sell its own goods, while Google, although the runt of the litter in this regard, has done a solid job of balancing direct sales of its Nexus handsets with carrier and retail sales, work which it could bring to bear effectively on a microconsole offering.
In short, the second wave microconsoles will enjoy all the advantages their predecessors did not. They’ll be backed by significant money, marketing and development effort, and will have a major presence at retail. Moreover, they’ll be “trojan horse” devices in more ways than one, since their primary purpose will be as media devices, streaming content from Amazon, Google Play, iTunes, Hulu, Netflix and so on, while also serving as solid gaming devices in their own right. Here, then, is the convergence that microconsole advocates (and the rather less credible advocates of Smart TV) have been predicting all along; a tiny box that will stream all your media off the network and also build in enough gaming capability to satisfy the mainstream of consumers. Between the microconsole under the TV and the phone in your pocket, that’s gaming all sewn up, they reckon; just as a smartphone camera is good enough for almost everyone, leaving digital SLRs and their ilk to the devoted hobbyist, the professional and the poseur, a microconsole and a smartphone will be more than enough gaming for almost everyone, leaving dedicated consoles and gaming PCs to a commercially irrelevant hardcore fringe.
There are, I think, two problems with that assessment. The first is the notion that the “hardcore fringe” who will use dedicated gaming hardware is small enough to be commercially irrelevant; I’ve pointed out before that the strong growth of a new casual gaming market does not have to come at the cost of growth in the core market, and may even support it by providing a new stream of interested consumers. This is not a zero-sum game, and will not be a zero-sum game until we reach a point where there are no more non-gaming consumers out there to introduce to our medium. Microconsoles might do very well and still cause not the slightest headache to PlayStation, Xbox or Steam.
The second problem with the assessment is a problem with the microconsoles themselves – a problem which the Fire TV suffers from very seriously, and which will likely be replicated by subsequent devices. The problem is control.
Games are an interactive experience. Having a box which can run graphically intensive games is only one side of the equation – it is, arguably, the less important side of the equation. The other side is the controller, the device through which the player interacts with the game world. The most powerful graphics hardware in the world would be meaningless without some enjoyable, comfortable, well-designed method of interaction for players; and out of the box, Fire TV doesn’t have that.
Sure, you can control games (some of them, anyway) with the default remote control, but that’s going to be a terrible experience. I’m reminded of terribly earnest people ten years ago trying to convince me that you could have fun controlling complex games on pre-smartphone phones, or on TV remote controls linked up to cable boxes; valiant efforts ultimately doomed not only by a non-existent business ecosystem but by a terrible, terrible user experience. Smartphones heralded a gaming revolution not just because of the App Store ecosystem, but because it turned out that a sensitive multi-touch screen isn’t a bad way of controlling quite a lot of games. It still doesn’t work for many types of game; a lot of traditional game genres are designed around control mechanisms that simply can’t be shoehorned onto a smartphone. By and large, though, developers have come to grips with the possibilities and limitations of the touchscreen as a controller, and are making some solid, fun experiences with it.
With Fire TV, and I expect with whatever offering Google and Apple end up making, the controller is an afterthought – both figuratively and literally. You have to buy it separately, which keeps down the cost of the basic box but makes it highly unlikely that the average purchaser will be able to have a good game experience on the device. The controller itself doesn’t look great, which doesn’t help much, but simply being bundled with the box would make a bold statement about Fire TV’s gaming ambitions. As it is, this is not a gaming device. It’s a device that can play games if you buy an add-on; the notion that a box is a “gaming device” just because its internal chips can process game software, even if it doesn’t have the external hardware required to adequately control the experience, is the kind of notion only held by people who don’t play or understand games.
This is the Achilles’ Heel of the second generation of microconsoles. They offer a great deal – the backing of the tech giants, potentially huge investment and enormous retail presence. They could, with the right wind in their sales, help to bring “sofa gaming” to the same immense, casual audience that presently enjoys “pocket gaming”. Yet the giant unsolved question remains; how will these games be controlled? A Fire TV owner, a potential casual gamer, who tries to play a game using his remote control and finds the experience frustrating and unpleasant won’t go off and buy a controller to make things better; he’ll shrug and return to the Hulu app, dismissing the Games panel of the device as being a pointless irrelevance.
The answer doesn’t have to be “bundle a joypad”. Perhaps it’ll be “tether to a smartphone”, a decision which would demand a whole new approach to interaction design (which would be rather exciting, actually). Perhaps a simple Wiimote style wand could double as a remote control and a great motion controller or pointer. Perhaps (though I acknowledge this as deeply unlikely) a motion sensor like a “Kinect Lite” could be the solution. Many compelling approaches exist which deserve to be tried out; but one thing is absolutely certain. While the second generation of microconsoles are going to do very well in sales terms, they will primarily be bought as media streaming boxes – and will never be an important games platform until the question of control gets a good answer.
Microsoft terminated Windows XP support on Tuesday when it shipped the final public patches for the nearly-13-year-old operating system. Without patches for vulnerabilities discovered in the future, XP systems will be at risk from cyber criminals who hijack the machines and plant malware on them.
During an IRS budget hearing Monday before the House Financial Services and General Government subcommittee, the chairman, Rep. Ander Crenshaw (R-Fla.), wondered why the agency had not wrapped up its Windows XP-to-Windows 7 move.
“Now we find out that you’ve been struggling to come up with $30 million to finish migrating to Windows 7, even though Microsoft announced in 2008 that it would stop supporting Windows XP past 2014,” Crenshaw said at the hearing. “I know you probably wish you’d already done that.”
According to the IRS, it has approximately 110,000 Windows-powered desktops and notebooks. Of those, 52,000, or about 47%, have been upgraded to Windows 7. The remainder continue to run the aged, now retired, XP.
John Koskinen, the commissioner of the IRS, defended the unfinished migration, saying that his agency had $300 million worth of IT improvements on hold because of budget issues. One of those was the XP-to-7 migration.
“You’re exactly right,” Koskinen said of Crenshaw’s point that everyone had fair warning of XP’s retirement. “It’s been some time where people knew Windows XP was going to disappear.”
But he stressed that the migration had to continue. “Windows XP will no longer be serviced, so we are very concerned if we don’t complete that work we’re going to have an unstable environment in terms of security,” Koskinen said.
According to Crenshaw, the IRS had previously said it would take $30 million out of its enforcement budget to finish the migration.
Part of that $30 million will be payment to Microsoft for what the Redmond, Wash., developer calls "Custom Support," the label for a program that provides patches for critical vulnerabilities in a retired operating system.
Analysts noted earlier this year that Microsoft had dramatically raised prices for Custom Support, which previously had been capped at $200,000 per customer for the first year. Instead, Microsoft negotiates each contract separately, asking for an average of $200 per PC for the first year of Custom Support.
Using that average — and the number of PCs the IRS admitted were still running XP — the IRS would pay Microsoft $11.6 million for one year of Custom Support.
The remaining $18.4 million would presumably be used to purchase new PCs to replace the oldest ones running XP. If all 58,000 remaining PCs were swapped for newer devices, the IRS would be spending an average of $317 per system.
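The figures above can be checked with simple arithmetic. The sketch below is a back-of-the-envelope reconstruction (my own, not from the article) using the numbers reported: 110,000 total PCs, 52,000 already migrated, a $200-per-PC average for Custom Support, and the $30 million budget.

```python
# Back-of-the-envelope check of the IRS migration figures reported above.
# All inputs come from the article; the support/hardware split is the
# article's assumption, reproduced here for illustration.

total_pcs = 110_000            # Windows desktops and notebooks at the IRS
migrated = 52_000              # already upgraded to Windows 7
remaining = total_pcs - migrated        # still running XP

support_per_pc = 200           # average first-year Custom Support price
support_cost = remaining * support_per_pc

budget = 30_000_000            # reported migration budget
hardware_budget = budget - support_cost
per_system = hardware_budget / remaining

print(remaining)               # 58000 PCs still on XP
print(support_cost)            # 11600000 -> the $11.6 million figure
print(round(per_system))       # 317 -> the $317-per-system figure
```

Running this reproduces the article's $11.6 million and $317 numbers exactly, which suggests the reporter derived them the same way.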
Facebook released its second government requests report, covering the second half of 2013, and it expands on the first report in two ways. First, it includes requests to restrict or remove users' content from the site, whereas the first report was limited to requests for account information. And second, the report now includes data on Instagram, the photo-sharing service owned by Facebook.
Facebook is not breaking out the number of Instagram requests; they’re included in the overall tallies. But Instagram’s inclusion speaks to the popularity of the service, which Facebook acquired in 2012 but didn’t include in its government requests report for the first half of 2013.
The report includes data on government requests to receive data about Instagram accounts and to restrict access to its content.
Facebook receives requests to restrict or remove content based on countries’ laws over what can be shared online. When the request is legally sound, Facebook restricts access to content in the specific country whose government objected to it. If Facebook also determines that the flagged content violates its own standards, it removes the content globally. Separately, Facebook also receives requests for account information and data, many of which relate to criminal cases such as robberies or kidnappings.
Facebook does not hand over data every time it receives a government request — sometimes the requests are overly broad or vague, or do not comply with legal standards, the company says.
In the U.S., Facebook received about 12,600 law enforcement requests in the second half of 2013, up from the range of 11,000-12,000 it tallied in its first report. For the second half of 2013, Facebook said it produced data for about 81 percent of the requests.
Regarding U.S. government requests about national security matters, Facebook reported it may have received none or as many as 999, saying it couldn’t be more specific due to U.S. legal restrictions.
Governments in other countries across the world are also interested in Facebook users’ data. India ranked second behind the U.S. with about 3,600 requests targeting more than 4,700 accounts. Facebook produced data for roughly half of those requests.
More than 1,900 requests came from the U.K., while the governments of France, Germany and Italy each served Facebook with more than 1,600 data requests.
Besides Facebook, other companies like Yahoo, Google and Microsoft periodically release their own government request reports, as part of an effort to be more transparent to users. The tallies have taken on increased significance following leaks about U.S. government surveillance made by former contractor Edward Snowden.
“Grey Goo is remarkable not for what it has added to the RTS formula, but what it has stripped away,” PC Gamer wrote in its reveal of Grey Goo, a new real-time strategy game from the veterans at Petroglyph. Perhaps the same could be said of Grey Goo’s recently formed publisher Grey Box, which is seeking to strip away the more negative aspects of game publishing. Suits and creatives typically will bump heads because the two sides are looking at the creation of games from wildly different perspectives. But what if they actually had the same goals?
Ted Morris, executive producer at Petroglyph, felt an immediate kinship with the team at Grey Box. “As a small [studio] – small being 50, 60 people – we are always talking to publishers to see what deals we can put together. But with Grey Box, I think that we meshed better on a personal level with them as a company and as a group of people than we have ever meshed with another group,” he enthused to GamesIndustry International during GDC. “And we’ve worked with Sega and LucasArts – all the big guys – and certainly talked to everybody else, too – the EAs and everybody – and these guys – man, we just gelled with these guys so well.”
Morris said that Grey Box’s approach to publishing was noticeably different from the start. While other, larger publishers may immediately come up with marketing plans and sales targets, Grey Box found itself on the same page with Petroglyph: fun comes first.
“Every meeting that we have is always a sit down and then people open up financial books and they start talking about what the sales figures are going to be like, and when we sit down with [Grey Box], it’s like ‘how can we make a great game?’ We don’t even talk about money, we talk about ‘how good can we make this game?’ and ‘how successful will it be?’ You know, let the game drive the sales, don’t let the marketing drive the sales, don’t let the sales department drive the sales. It’s really about, if you make a great game, they will come,” Morris continued. “They spoke to that so often, so frequently that we thought, ‘man, these guys just want to help us focus on what’s really important.’”
One of the defining traits for publisher Grey Box is that they’re all gamers at heart, noted Josh Maida, executive producer for the publisher.
“I’m not going to pre-judge any of those other publishers – I mean, for all I know they love games as much as we do. And we do. We all love games. We all come from different areas. I lost a whole grade point in college to Street Fighter, and… we want to be fiscally mindful. You need to make money, but with the money we make, we want to make more games,” he remarked.
“So I think at the core of that is we’re not trying to take away from the industry. We want it to feed itself and go bigger. Quality over quantity is something that we’re mindful of. We also just want to make a good working relationship for our partners… everybody’s in here for fulfillment. The talent we work with, they could all be working in private industries for twice the amount they do, but they’re here because they love to make games, and so we want to be mindful of that. And when people die, they want to know they did great things and so we want to create those opportunities for people.”
Tony Medrano, creative director for Grey Box, criticized other publishers for being too quick to just follow another company’s successful formula.
“We’re not chasing a trend, we’re chasing something we believe in, we’re chasing something we like, and we’re not trying to shoehorn a formula or monetization model onto things that just don’t work because they’re popular,” he added. “I think from the get-go, it’s been all about how can we make the best game, and then everything else follows from that. I think a difference structurally [with other publishers] would be that we have a very lean and mean team. We’re not trying to build a skyscraper and have redundant folks. Everybody that’s here really cares, has some bags under their eyes from late nights… I think it is just that we look at all our partners as actual partners. We let them influence and make the product better, whether it’s the IP or the game.”
Speaking of monetization models, Maida commented that there’s no “secret agenda to Zyngafy RTS or anything.” Grey Goo is strictly being made for the PC, but the RTS genre easily lends itself to free-to-play. Upon the mere mention of free-to-play, however, you could almost feel the collective blood pressure in the room rising. It’s clearly not the type of experience that Petroglyph and Grey Box are aiming for.
For Petroglyph’s Morris, in particular, free-to-play hit a nerve. “I’m going to jump in here, sorry. I’m really annoyed!” he began. “There’s been such a gold rush for free-to-play right now that is driving publishers – I mean, there needs to be a good balance. There’s a great place for free-to-play – I play lots of free-to-play games – but it is driving developers like us to focus on money instead of making great game content. I’m not going to name any examples, but I’ve been disappointed with some of the free-to-play offerings because it’s not so much about making a great experience for the player anymore. It’s about ‘how can we squeeze them just a little bit more?’ or annoy them to the point where they just feel like they have to pay.”
Medrano added, “I get frustrated when I play free-to-play games, and if I purchase something, I feel dirty. I feel like ‘oh, I got cheated, I fell for the trap.’ Or even more modern games where they baby you through the whole thing. There’s no more of that, like, ‘this is tough, so that means if I get good at this, there’s reward – there’s something there.’”
Ultimately, while Petroglyph and Grey Box came together thanks to a shared love of the RTS genre, they feel there’s a real opportunity to bring back hardcore, intelligent games.
Andrew Zoboki, lead game designer at Petroglyph, chimed in, “It’s almost as if the industry has forgotten about the intelligent gamer. They feel like that everyone’s going to be shoehorned in there, and I would say even from a design perspective that a lot of design formulas for a lot of things, whether they be free-to-play or what the mainstream is going to, next-gen and such, that all those titles are kind of a little more cookie-cutter than they probably should be. They’ve tried to shoehorn gamers into a formula and say, ‘this is what a gamer is,’ rather than understanding that gamers are a very wide and diverse bunch of individuals, everyone from the sports jock to the highly intellectual, and they all have [different] tastes… there’s different games that will appeal to different demographics… if you make the games that players want to play, they will come.”
And that really is at the heart of it. Morris lamented how business creeps into the games creation equation far too often. “They’re trying to balance the game with Excel spreadsheets instead of sitting down and actually playing it and having focus tests and bringing people in and actually trying to iterate on the fun,” he remarked about other publishers.
For Grey Box at the moment, the focus is on making Grey Goo the best it can be, but the company does have plans for more IP. It’s all under wraps currently, however.
“We do have a roadmap, but it’s not based off of the calendar year. We do have another game in the works right now and we might announce that at E3. And we have a road map for this IP, as well,” Maida said. “Obviously we want to get it in the hands of players and fans to see what they respond to, but we’ve got capital investment in the IP with hopes to not only extend this lineage of RTS’s but possibly grow out that franchise and other genres as well.”
Grey Box plans to release Grey Goo later this year.
Based on the firm’s Kabini system on chip (SoC), the platform, named “AM1”, combines most system functions into one chip, with the motherboard and APU together costing between $39 and $59.
Launched at the beginning of March and released today in North America, AMD’s AM1 Platform is aimed at markets where entry-level PCs are competing against other low-cost devices.
“We’re seeing that the market for these lower-cost PCs is increasing,” said AMD desktop product marketing manager Adam Kozak. “We’re also seeing other devices out there trying to fill that gap, but there’s really a big difference between what these devices can do versus what a Windows PC can do.”
The AM1 Platform combines an Athlon or Sempron processor with a motherboard based on the FS1b upgradable socket design. These motherboards have no chipset, as all functions are integrated into the APU, and only require additional memory modules to make a working system.
The AM1 SoC has up to four Jaguar CPU cores and an AMD Graphics Core Next (GCN) GPU, an on-chip memory controller supporting up to 16GB of DDR3-1600 RAM, plus all the typical system input and output functions, including SATA ports for storage, USB 2.0 and USB 3.0 ports, as well as VGA and HDMI graphics outputs.
AMD’s Jaguar core is best known for powering both Microsoft’s Xbox One and Sony’s PlayStation 4 (PS4) games consoles. The AM1 Platform supports Windows XP, Windows 7 and Windows 8.1 in both 32-bit and 64-bit versions.
AMD said that it is going after Intel’s Bay Trail with the AM1 Platform, and expects to see it in small form factor desktop PCs, netbooks and media-streaming boxes.
“We see it being used for basic computing, some light productivity and basic gaming, and really going after the Windows 8.1 environment with its four cores, which we’ll be able to offer for less,” Kozak added.
AMD benchmarked the AM1 Platform against an Intel Pentium J2850 using PCMark 8 v2 and claimed it produced double the performance of the Intel processor. See the table below.
The FS1b upgradable socket means that users will be able to upgrade the processor at a later date, whereas in Bay Trail and other low-cost platforms the processor is soldered directly to the motherboard.
AMD lifted the lid on its Kabini APU for tablets and mainstream laptops last May. AMD’s A series branded Kabini chips are quad-core processors, with the 15W A4-5000 and 25W A6-5200 clocked at 1.5GHz and 2GHz, respectively.
Nvidia certainly did one thing right with the Shield gaming console. It has learned that users of such devices really like continuous and regular updates that add features and functionality to their devices.
The effort probably would not be worth it, given the limited number of Shield consoles in the wild, but it demonstrates that Nvidia is committed to the concept. The Shield today sells for a rather attractive $199 and offers Gamestream support on your home network, as long as you have a 5GHz-capable router.
With the latest April update, Nvidia is offering remote Gamestream support. This is good news, but we still have to try it in the field before drawing any conclusions. Let’s not forget that Nvidia lets you use the Shield in console mode, playing your games on a big-screen TV as long as you have the necessary Bluetooth controller. Grid gaming works for some users depending on the region, with California as the epicentre, but this functionality was enabled before the April update. Your ping needs to stay below 150ms, and Nvidia will let you try out a dozen games for free. We tried it and it works nicely, as long as you don’t stray too far from the 5GHz router.
The Shield April update also brings mouse and keyboard support in console mode, which will make your life easier when playing Civilization V, World of Warcraft and similar games from your couch. Nvidia also updated the game touch mapper, making it easier to map controls for your favourite touch-based games, and you can also download predefined settings from community profiles. Full support for Android 4.4.2 KitKat is certainly a nice addition. Andrew Conrad, an Nvidia tech guy, gaming nerd and the face of Nvidia gaming for a new generation, also confirms that Gamestream on the go will work via WiFi, tethering, MiFi or hotspot internet connections.
As we already pointed out, Nvidia is clearly putting a lot of effort into Shield on the software front. This is not always the case with niche products, but Shield is part of a much wider strategy that revolves around streaming, blurring the lines between different platforms. Whether or not upcoming generations of the console can gain a mainstream following remains to be seen.
It appears that AMD’s professional graphics push is finally starting to pay off.
AMD’s graphics business is chugging along nicely, thanks to the success of Hawaii-based high-end cards, solid sales of rebranded mainstream cards, plenty of positive Mantle buzz and of course the cryptocurrency mining craze, which is winding down.
However, AMD traditionally lags behind Nvidia in two particular market segments – mobile graphics and professional graphics. Nvidia still has a comfortable lead in both segments and its position in mobile is as strong as ever, as it scored the vast majority of Haswell design wins in 2013. However, AMD is fighting back in the professional market and it is slowly gaining ground.
Mac Pro buckets boost FirePro sales
Last year, on the sidelines of its Hawaii launch event, AMD told us that it has high hopes for its professional GPU line-up moving forward.
This was not exactly news. At the time it was clear that AMD GPUs would end up in Cupertino’s latest Mac Pro series. The question was how much AMD stood to gain, both in terms of market share and revenue.
Although we are not fans of Apple’s marketing hype and the hysteria associated with its consumerish fanboys, we have to admit we have a soft spot for the new Mac Pro buckets. The bucket form factor is truly innovative and, as usual, the Mac Pro has the brains to match its looks. Basically, it’s Apple going back to its roots.
Late last year it was reported that AMD would boost its market share in the professional segment to 30 percent this year, up from about 20 percent last year. For years Nvidia outsold AMD by a ratio of four to one in the professional space. The green team still has a huge lead, but AMD appears to be closing the gap.
It is hard to overstate the effect of professional graphics on Nvidia’s bottom line. The highly successful Quadro series always was and still is Nvidia’s cash cow. AMD is fighting back with competitive pricing and good hardware. In addition, the first Hawaii-based professional cards are rolling out as we speak. AMD’s new FirePro W9100, its first professional product based on Hawaii silicon, was announced a couple of weeks ago.
Can AMD keep it up?
2014 will be a good year for AMD’s professional graphics business, but it remains to be seen whether the winning streak will continue. Apple does not care about loyalty; it’s not exactly a monogamous hardware partner. It has a habit of shifting between Nvidia and AMD graphics in the consumer space, so we would not rule out Nvidia in the long run. The green team might be back in future Mac Pro designs, but AMD has a few things working in its favour.
One of them is Adobe’s love of OpenCL, which makes AMD’s professional offerings a bit more popular than Nvidia products in some circles. Adobe CC loves OpenCL, and AMD has been collaborating with Adobe for years to improve support, which now extends to SpeedGrade CC, After Effects CC, Premiere, Adobe Media Encoder CC and other Adobe products.
Pricing is another important factor, as AMD has a tradition of undercutting Nvidia in the professional segment. When you happen to control 20 percent of the market in a duopoly, competitive pricing is a must.
Also, changing vendors in the professional arena is a bit trickier than swapping out a consumer graphics card or the mobile GPU in a MacBook. This is perhaps AMD’s biggest advantage at the moment: maintaining such design wins is quite a bit easier than winning them in the first place. AMD learned this lesson the hard way; Nvidia has not had to, at least not yet.
According to Seeking Alpha, demand for Mac Pro buckets is “crazy-high” and delivery times range from five to six weeks. Seeking Alpha goes on to conclude that AMD could make about $800 million off a two-year Mac Pro design win, provided Apple sells 500,000 units over the next two years. At the moment it appears that Apple should have no trouble shipping half a million units, and then some.
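As a quick back-of-the-envelope check of the Seeking Alpha figures (recalling that every 2013 Mac Pro ships with dual FirePro GPUs):

```python
# Back-of-the-envelope check of the Seeking Alpha estimate.
projected_revenue = 800_000_000  # USD over two years
projected_units = 500_000        # Mac Pros sold over the same period

# Implied AMD content per Mac Pro sold.
revenue_per_unit = projected_revenue / projected_units
print(revenue_per_unit)  # 1600.0

# Each Mac Pro carries two FirePro GPUs, so per-GPU revenue:
gpus_per_unit = 2
print(revenue_per_unit / gpus_per_unit)  # 800.0
```

In other words, the estimate implies roughly $1,600 of AMD silicon per Mac Pro, or about $800 per FirePro GPU, which is the sort of money professional graphics commands.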
If AMD manages to hold onto the Mac Pro deal, it stands to make a pretty penny over the next couple of years. However, if it also manages to seize more design wins in Apple consumer products, namely iMacs and MacBooks, AMD could make a small fortune on Cupertino deals alone.
Bear in mind that AMD’s revenue last year was $5.3 billion, so $800 million over the course of two years is a huge deal – even without consumer products in iMacs and MacBooks.
Microsoft has hardened its stance on classifying programs as adware, giving developers three months to conform to the new principles or risk having their programs blocked by the company’s security products.
The most important change in Microsoft’s policy is that adware programs will be blocked by default starting July 1. In the past such programs were allowed to run until users chose one of the recommended actions offered by the company’s security software.
Interestingly, Microsoft’s crackdown on adware comes as it introduces tools to make it easier for developers to incorporate advertising into Windows 8.1 and Windows Phone apps.
The company has re-evaluated its criteria for classifying applications as adware based on the principle that users should be able to choose and control what happens on their computers, according to Michael Johnson, a member of the Microsoft Malware Protection Center.
First of all, only programs that display ads promoting goods and services inside other programs — for example, browsers — will be evaluated as possible unwanted adware applications, Johnson said in a blog post. “If the program shows advertisements within its own borders it will not be assessed any further.”
In order to avoid being flagged as adware and blocked, programs whose revenue model includes advertising must only display ads or groups of ads that have an obvious close button. The ads must also clearly indicate the name of the program that generated them.
Recommended methods for closing the ad include an “X” or the word “close” in a corner; the program name can be specified through phrases like “Ads by …”, “… ads”, “Powered by …”, “This ad served by …”, or “This ad is from …”.
“Using abbreviations or company logos alone are not considered clear enough,” Johnson said. “Also, only using ‘Ads not by this site’ does not meet our criteria, because the user does not know which program created the ad.”
In addition to following these ad display guidelines, programs need to provide a standard uninstall method in the Windows control panel or the browser add-on management interface, if the program operates as a browser extension or toolbar. The corresponding uninstall entries must contain the same program names as displayed in the generated ads.
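Purely as an illustrative sketch (this is not Microsoft’s tooling, and the program name “ShoppingHelper” is made up), the attribution phrasings listed above lend themselves to a simple pattern check:

```python
import re

# Attribution templates Microsoft lists as acceptable; the program
# name that generated the ad fills the ".+" slot.
_ACCEPTED_PATTERNS = [
    r"^Ads by .+$",
    r"^.+ ads$",
    r"^Powered by .+$",
    r"^This ad served by .+$",
    r"^This ad is from .+$",
]

def label_is_compliant(label: str) -> bool:
    """Return True if an ad label matches one of the accepted phrasings.

    'Ads not by this site' is explicitly rejected because it does not
    tell the user which program created the ad.
    """
    label = label.strip()
    if label == "Ads not by this site":
        return False
    return any(re.match(p, label) for p in _ACCEPTED_PATTERNS)

print(label_is_compliant("Ads by ShoppingHelper"))   # True
print(label_is_compliant("Ads not by this site"))    # False
print(label_is_compliant("SH"))                      # False: bare abbreviation
```

A bare abbreviation or logo fails because it matches none of the accepted templates, which mirrors Johnson’s point that abbreviations alone are not considered clear enough.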
“We are very excited by all of these changes,” Johnson said. “We believe that it will make it easy for software developers to utilize advertising while at the same time empowering users to control their experience.”
Adware programs typically affect the Web browsing experience and have been a nuisance for years, primarily because their developers make it intentionally hard to completely remove all of their components or undo the changes made by these applications.
With Fire TV, Amazon has launched its first box to deliver games to the living room. The $99 Android-based hardware features a quad-core processor, a dedicated GPU and a separate gaming controller for $40. Moreover, Amazon will bring exclusive games to the Fire TV through its first-party team at Amazon Game Studios.
Similar to other microconsoles, the games on the digital store will be either free or quite cheap to purchase, which Amazon hopes will make it attractive to the masses. Amazon has an army of resources and while other microconsoles have failed to become mainstream, it would be foolish to doubt Amazon’s potential. Should dedicated console makers like Sony or Microsoft be concerned? The majority of the analysts GamesIndustry International spoke to didn’t think so.
Wedbush Securities’ Michael Pachter called the announcement a “nonevent,” saying Amazon “will not be a player.” DFC Intelligence’s David Cole agrees.
“Short term they don’t have a reason to be concerned but long term it could be an issue. The main focus of the box is streaming video. The issue is video is 1) a much bigger application than games and 2) much easier to do. It is clear games are at best currently a distant afterthought for Amazon in terms of the Amazon box. The type of games they are looking at are more in the realm of tablet/mobile/casual products, which are really no substitute for what the dedicated consoles provide,” he said.
“So I think right now it is a rounding error in the game industry but that could change if Amazon decides it wants to make a big investment in the space. However, the reality is you really have to very directly target gamers and Amazon right now is only half-heartedly doing that.”
Indeed, hardcore gamers won’t be passing up the PlayStation 4 or Xbox One for an Amazon Fire TV anytime soon, said independent analyst Billy Pidgeon: “Hardcore games enthusiasts won’t be satisfied by this or any other inexpensive television-connected device. Still, Microsoft, Sony and Nintendo are increasingly competing for individual and family entertainment time with interactive entertainment, video and audio available in the home on multiple devices, including smartphones and tablets as well as multimedia boxes that connect to television sets.”
Pidgeon conceded that “as more devices can offer games and media, consoles’ appeal for the mass market (an important factor in mid-to-end cycle console adoption) is in steep decline.” He added that if anyone should be worried now, it should be Apple and Google.
“Apple and Google have been the main contenders for online media transactions, but Amazon has the motivation, the focus and the distribution to move Fire TV quickly into lead position. Apple has competition issues with media providers and Google is behind in online retail and user experience. Amazon’s entry into connected TV could energize the competition and speed household penetration,” he said.
Asif Khan, CFO of Virtue LLC, wasn’t wowed by the Fire TV announcement either. Even with exclusive games – and now Amazon has hired some heavy hitters in Clint Hocking and Kim Swift – he’s not convinced that Amazon can disrupt the console market.
“We knew that Amazon was going to enter the games industry, but I am not sure who is going to feel compelled to buy it with a controller that costs 40 percent of the device. The success of the device as a gaming alternative will likely depend on the software that Amazon’s gaming studio can create, but we have seen with Nintendo’s Wii U flop that first-party content is not enough to get consumers to buy a device,” Khan noted.
“There is a chance that Fire TV can make some waves if Amazon’s partners continue to bring games to the device, but in my opinion this product will achieve limited success,” he continued. “It feels like all of Apple’s competitors have now shown their cards in anticipation of the upcoming Apple TV refresh. We have seen Xbox One, Chromecast, and now Fire TV. None of these products have wowed consumers and ushered in a new age of how we interact with TVs. This announcement by Amazon today just has me even more interested in what Apple is going to announce this year. Clearly the set top box market has a lot of players and Amazon has a chance to contribute something to that increasingly crowded space. With that being said, I do not think the Fire TV is a game changer for video game consoles. It is a set top box that also plays games, with the potential of asymmetric gameplay.”
If the analysts seem overly negative, perhaps they are forgetting about Amazon’s web services. The back-end technology could make a difference, said IDC research manager Lewis Ward, who believes Amazon is “absolutely” a contender in the console space.
“Anybody in high tech or in content that sees Amazon jump into their bread and butter market and isn’t concerned about what Amazon might be able to do should have their head examined,” he commented. “Let’s put it this way: Fire TV is by far the most viable microconsole platform out there. Couple that with Amazon’s back-end streaming, storage, and game-hosting platforms and developer tools and you’ve got a serious threat to casual home-based console gaming in particular, at least in North America and pockets of Europe in the next few years.”
The personal data gathering abilities of Google, Facebook and other technology giants have sparked growing unease among Americans, with a majority worried that Internet companies are encroaching too much upon their lives, a new poll showed.
Google and Facebook generally topped lists of Americans’ concerns about the ability to track physical locations and monitor spending habits and personal communications, according to a poll conducted by Reuters/Ipsos from March 11 to March 26.
The survey highlights a growing ambivalence towards Internet companies whose popular online services, such as social networking, e-commerce and search, have blossomed into some of the world’s largest businesses.
Now, as the boundaries between Web products and real world services begin to blur, many of the top Internet companies are racing to put their stamp on everything from home appliances to drones and automobiles.
With billions of dollars in cash, high stock prices, and an appetite for more user data, Google, Facebook, Amazon and others are acquiring a diverse set of companies and launching ambitious technology projects.
But their grand ambitions are inciting concern, according to the poll of nearly 5,000 Americans. Of 4,781 respondents, 51 percent replied “yes” when asked if those three companies, plus Apple, Microsoft and Twitter, were pushing too far and expanding into too many areas of people’s lives.
The poll measures accuracy using a credibility interval and is accurate to plus or minus 1.6 percentage points.
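For context, Reuters/Ipsos quotes a Bayesian credibility interval rather than a classical margin of error; a conventional 95 percent margin of error for a sample of 4,781 comes out slightly tighter, as a quick sketch shows:

```python
import math

n = 4781   # respondents
p = 0.5    # worst-case proportion (maximises the margin)
z = 1.96   # z-score for 95 percent confidence

# Classical margin of error for a simple random sample.
margin = z * math.sqrt(p * (1 - p) / n)
print(round(margin * 100, 1))  # 1.4 (percentage points)
```

The quoted 1.6-point credibility interval is a little wider than this 1.4-point figure, reflecting the different methodology.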
“It’s very accurate to say that many people have love-hate relationships with some of their technology providers,” said Nuala O’Connor, the President of the Center for Democracy and Technology, an Internet public policy group which has received funding from companies including Google, Amazon and Microsoft.
“As technology moves forward, as new technologies are in use and in people’s lives, they should question ‘Is this a fair deal between me and the device?’”
Fears about the expanding abilities of tech companies crystallized when Google acknowledged in 2010 that its fleet of StreetView cars, which criss-cross the globe taking panoramic photos for Google’s online mapping service, had inadvertently collected emails and other personal information transmitted over unencrypted home wireless networks.
Yet many Americans remain ignorant of the extent to which Internet companies are trying to extend their reach.
Google is one of the most aggressively ambitious, investing in the connected home through its $3.2 billion acquisition of smart thermostat maker Nest. Google is also investing in self-driving cars, augmented-reality glasses, robots and drones.
Almost a third of Americans say they know nothing about plans by Google and its rivals to get into real-world products such as phones, cars and appliances. Still, roughly two thirds of respondents are already worried about what Internet companies will do with the personal information they collect, or how securely they store the data.
Oracle has announced the availability of the latest edition of its NoSQL database.
NoSQL is Oracle’s distributed key-value database. Now in its third version, the enhancements this time are heavily centred on security and business continuity.
Oracle NoSQL 3.0 features improvements in security, with cluster-wide, password-based user authentication and integration with Oracle Wallet. Session-level Secure Sockets Layer (SSL) encryption and network port restriction are also included.
For disaster recovery and prevention, there’s automatic fail-over to metro-area secondary data centres, while secondary server zones can be used to offload read-only workloads to take the pressure off primary servers under stress.
For developers, there is added support for tabular data models, which Oracle claims will simplify application design and improve integration with SQL-based applications, while secondary indexing improves query performance.
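As a conceptual sketch only (not Oracle’s API, whose clients are Java-based), the point of a secondary index in a key-value store is easy to see: without one, finding records by a non-key field means scanning every value.

```python
# Toy key-value store: keys map to records, as in a NoSQL store.
store = {
    "user/1": {"name": "Ada", "city": "London"},
    "user/2": {"name": "Grace", "city": "New York"},
    "user/3": {"name": "Alan", "city": "London"},
}

# Without an index: O(n) scan of every record per query.
def find_by_city_scan(city):
    return sorted(k for k, v in store.items() if v["city"] == city)

# Secondary index on "city": built once, then lookups are a single
# dictionary access instead of a full scan.
city_index = {}
for key, record in store.items():
    city_index.setdefault(record["city"], set()).add(key)

def find_by_city_indexed(city):
    return sorted(city_index.get(city, set()))

print(find_by_city_indexed("London"))  # ['user/1', 'user/3']
```

In a real deployment the index must also be kept in sync on every write, which is the bookkeeping the database takes on so that queries on non-key fields stay fast.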
“Oracle NoSQL 3.0 helps organisations fill the gap in skills, security and performance by delivering [...] enterprise-class NoSQL database that empowers database developers and DBAs to easily, intuitively and securely build and deploy next generation applications,” said Oracle’s EVP of Database Server Technologies, Andrew Mendelsohn.
It’s already been a big week for the SQL community, with NoSQL arriving on MariaDB for the first time courtesy of a tie-up between SkySQL, Google and IBM on Tuesday, while yesterday Fusion-io announced the use of non-volatile memory (NVM) compression in MySQL to increase the capacity of SSD storage.
Both the community and enterprise versions of Oracle NoSQL Database 3.0 are available for download now from the Oracle Technology Network.