Hacking for espionage purposes is drastically rising, with groups or national governments from Eastern Europe playing a growing role, according to one of the most comprehensive annual studies of computer intrusions.
Spying intrusions traced back to any country in 2013 were blamed on residents of China and other East Asian nations 49 percent of the time, but Eastern European countries, especially Russian-speaking nations, were the suspected launching site for 21 percent of breaches, Verizon Communications Inc said in its annual Data Breach Investigations Report.
Those were by far the most active areas detected in the sampling, which drew more than half of its data from victims in the United States. About 25 percent of spying incidents could not be attributed to attackers from any country, according to the authors of the report.
Though the overall number of spying incidents studied tripled to 511 from the total in the 2013 Verizon report, most of that increase is due to the addition of new data sources. Even looking only at the same contributors as before, however, espionage cases grew, said Verizon investigator Bryan Sartin.
Not all electronic spying was blamed on governments. Investigators from Verizon, Intel Corp’s McAfee, Kaspersky Lab and other private companies and public agencies contributing data ascribed 11 percent of espionage attacks to organized criminals and 87 percent to governments.
In some cases, the criminal gangs were probably looking to sell what they found to governments or competitors of the victims.
“We do see a slight merging between the classic organized criminal and the espionage crook,” Sartin said, adding that he expected that trend to continue.
If the rise of detected Eastern European spying comes as a surprise to those mainly familiar with accusations against China, a bigger surprise might be the study’s findings about attacks on retailers.
Though recent breaches at Target Corp and other retailers through their point-of-sale equipment have dominated the headlines and prompted congressional hearings in the past few months, fewer such intrusions have been reported to the Verizon team than in past years, even as the number of report contributors has multiplied.
“The media frenzy makes quite a splash, but from a frequency standpoint, this largely remains a small-and-medium business issue,” the study says.
Alibaba’s Tmall and Taobao sites already sell everything from clothes and furniture to car tires and medicines. But soon they’ll also be offering 3G data and voice call plans, according to the Chinese tech giant.
User registration for mobile phone numbers will begin in May.
Alibaba is among the Chinese companies that received a mobile virtual network operator (MVNO) license back in December. This allows them to resell wireless services from the nation’s state-controlled mobile carriers China Mobile, China Unicom, and China Telecom.
It won’t be hard for Alibaba to find customers. Taobao and Tmall are two of China’s largest online retail sites. In addition, the company is aggressively expanding into mobile services, developing its own operating system for smartphones, along with a mobile chatting app called Laiwang.
As smartphones become the number one way Chinese go online, local tech companies are trying to corner a part of the mobile Internet market. In Alibaba’s case, the company has been on a spending spree, buying a stake in Chinese social networking site, Weibo.com, and moving to acquire the country’s largest online mapping provider.
Offering data and voice services could help Alibaba attract more users to its e-commerce services. As China only has three mobile carriers, there’s plenty of room for MVNOs to grow, according to analysts. But Alibaba won’t be the only e-commerce company offering mobile phone services.
JD.com, another major online retailer in China, has also received an MVNO license. The company plans to offer its telecom services in the second quarter of this year.
JD.com has the second largest business-to-consumer retail site behind Tmall.com, according to research firm Analysys International. The company is set to grow even faster after Chinese Internet giant Tencent bought a 15 percent stake in it.
As part of the deal, JD.com will take over two of Tencent’s online retail businesses. It will also gain access to Tencent’s WeChat app, a mobile messaging app with 300 million users.
Square Inc has been having discussions with several rivals for a possible sale as the mobile payments startup hopes to stem widening losses and dwindling cash, the Wall Street Journal reported, citing people familiar with the matter.
The company spoke to Google Inc earlier this year about a possible sale, the Journal reported, adding that it wasn’t clear whether the talks are continuing.
Square, founded in 2009 by Jack Dorsey, co-creator of Twitter Inc, will likely fetch billions of dollars in a sale. Square insiders sold shares earlier this year on the secondary market, valuing the company at roughly $5.2 billion, the Journal said.
The company recorded a loss of about $100 million in 2013, the Journal said, adding that the startup has consumed more than half of the roughly $340 million it raised from at least four rounds of equity financing since 2009.
Square makes credit card readers that slot into smartphones such as Apple Inc’s iPhone.
Square also had informal discussions about a deal with Apple and eBay Inc’s PayPal in the past, but those conversations never developed into serious talks, the Journal said.
A spokesman for Square told the Journal that the company never had acquisition talks with Google. The report also quoted a PayPal spokesman as saying that the company did not have acquisition talks with Square.
Square, Google, Apple and eBay were not immediately available for comment.
AMD posted some rather encouraging Q1 numbers last night, but slow PC sales are still hurting the company, along with the rest of the sector.
When asked about the PC market slump, AMD CEO Rory Read confirmed that the PC market was down sequentially 7 percent. This was a bit better than the company predicted, as the original forecast was that the PC market would decline 7 to 10 percent.
Rory pointed out that AMD can grow in the PC market as there is a lot of ground that can be taken from the competition. The commercial market did better than expected and Rory claims that AMD’s diversification strategy is taking off. AMD is trying to win market share in desktop and commercial segments, hence AMD sees an opportunity to grow PC revenue in the coming quarters. Rory also expects that tablets will continue to cannibalize the PC market. This is not going to change soon.
Kaveri and Kabini will definitely help this effort as both are solid parts priced quite aggressively. Kabini is also available in AMD’s new AM1 platform and we believe it is an interesting concept with plenty of mass market potential. Desktop and notebook ASPs are flat, which is something the financial community really appreciated. It would not have been unusual for average selling prices to fall, given that the global PC market was down.
Kaveri did well in the desktop high-end market in Q1 2014 and there will be some interesting announcements in the mobile market in Q2 2014 and beyond.
GlobalFoundries should be rolling out 20nm chips later this year and we hope that some AMD 20nm products might actually launch this year. The foundry failed to conquer the world with its 28nm process, but after some delays it sorted out the problems and managed to ship some high-volume parts based on this process.
GlobalFoundries is manufacturing AMD’s new Kaveri APUs, while TSMC is making the Jaguar-based 28nm parts. We are not sure who is making the new server parts such as Seattle or Berlin, both 28nm designs. It is expected that GlobalFoundries should commence volume production of some 20nm parts later this year and the company has big plans for a faster transition to 14nm.
GlobalFoundries cozying up to Samsung
It is no secret that Intel leads the way in new process transitions and that Intel plans to ship 14nm parts at a time when TSMC and GlobalFoundries are still struggling to ship their first 20nm parts.
GlobalFoundries has now announced that it will start a strategic collaboration with none other than Samsung for its 14nm transition. It is easy to see that these two big players need each other in order to fight against bigger competitors like Intel and TSMC. GlobalFoundries and Samsung don’t have much overlap, either.
This joint venture should result in faster time-to-market for 14nm FinFET-based products, and we see at least two advantages. According to Ana Hunter, Vice President of Product Management at GlobalFoundries, the process design kits are available today and the foundry should be ready to manufacture 14nm FinFET products by the end of 2014. This sounds a bit optimistic, as we have heard bold announcements like this before, especially since neither company has really started shipping 20nm parts yet, at least not high-volume, high-performance ones. It should be noted that Samsung joined the 28nm club quite late and shipped its first 28nm SoC just a year ago, in the Galaxy S4.
Sawn Han, vice president of foundry marketing at Samsung Electronics, calls this partnership a ‘game changer’ as it will enable 14nm production at a total of four fabs worldwide, three from Samsung and one from GlobalFoundries. Samsung will offer 14nm FinFET from its S2 fab in Austin, Texas, its S3 fab in Hwaseong, South Korea, and its S1 fab in Giheung, South Korea. GlobalFoundries is preparing its Fab 8 in Saratoga County, New York, for the 14nm push.
14nm FinFET crucial for next-gen SoC designs
The companies say 14nm FinFET technology features a smaller contact gate pitch for higher logic packing density and smaller SRAM bitcells to meet the increasing demand for memory content in advanced SoCs, while still leveraging the proven interconnect scheme from 20nm to offer the benefits of FinFET technology with reduced risk and the fastest time-to-market.
The 14nm LPE process should deliver 20 percent more performance than 20nm LPE while cutting power consumption by 35 percent. It should also save 15 percent of die space, making it possible to cram more components into the same die size.
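Taken at face value, those figures compound into a performance-per-watt gain of nearly 2x over 20nm LPE. A quick back-of-the-envelope check in Python (the normalized baseline of 1.00 is just an illustrative convention, not a figure from the announcement):

```python
# Quoted 14nm LPE gains versus 20nm LPE:
# +20% performance, -35% power, -15% die area.
perf_20nm, power_20nm, area_20nm = 1.00, 1.00, 1.00  # normalized baseline

perf_14nm = perf_20nm * 1.20     # 20 percent more performance
power_14nm = power_20nm * 0.65   # 35 percent less power
area_14nm = area_20nm * 0.85     # 15 percent smaller die

# If both claims hold, performance per watt improves by roughly 1.85x.
perf_per_watt_gain = perf_14nm / power_14nm
print(f"perf/W gain: {perf_per_watt_gain:.2f}x")  # prints "perf/W gain: 1.85x"
```

This is the usual pattern with node-shrink marketing: the headline numbers are quoted separately, but the combined perf-per-watt figure is what matters for mobile SoCs.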
We have yet to see the first mobile 20nm parts in actual products. Qualcomm announced its first Snapdragons based on the new process a few weeks ago, but they won’t be ready for months. You can expect that a SoC manufactured on 14nm could end up 40 to 50 percent faster than its 28nm predecessor and that the power requirement could go down by 50 to 70 percent at best.
The total market for mobility, wireless and computer network storage is expected to hit around $20 billion by 2017. Of course, everyone wants a piece of that action. The joint venture will offer both the 14nm LPE (Low Power Early) and 14nm LPP (Low Power Plus) processes.
All we need now are design wins from high-volume customers, and if we were to bet we would place our money on Samsung, namely its Exynos processors. We would be positively surprised to see 14nm SoCs in mobile phones and tablets in 2015, but it is a possibility. Keep in mind that we are still waiting to see the first 20nm SoCs and GPUs in action.
Researchers are looking at the possibility of making low-power, flexible and inexpensive computers out of plastic materials. Plastic is not normally a good conductive material. However, researchers said this week that they have solved a problem related to reading data.
The research, which involved converting electricity from magnetic film to optics so data could be read through plastic material, was conducted by researchers at the University of Iowa and New York University. A paper on the research was published in this week’s Nature Communications journal.
More research is needed before plastic computers become practical, acknowledged Michael Flatte, professor of physics and astronomy at the University of Iowa. Problems related to writing and processing data need to be solved before plastic computers can be commercially viable.
Plastic computers, however, could conceivably be used in smartphones, sensors, wearable products, small electronics or solar cells, Flatte said.
The computers would have basic processing, data gathering and transmission capabilities but won’t replace silicon used in the fastest computers today. However, the plastic material could be cheaper to produce as it wouldn’t require silicon fab plants, and possibly could supplement faster silicon components in mobile devices or sensors.
“The initial types of inexpensive computers envisioned are things like RFID, but with much more computing power and information storage, or distributed sensors,” Flatte said. One such implementation might be a large agricultural field with independent temperature sensors made from these devices, distributed at hundreds of places around the field, he said.
The research breakthrough this week is an important step in giving plastic computers the sensor-like ability to store data, locally process the information and report data back to a central computer.
Mobile phones, which demand more computing power than sensors, will require more advances because communication requires microwave emissions usually produced by higher-speed transistors than have been made with plastic.
It’s difficult for plastic to compete in the electronics area because silicon is such an effective technology, Flatte acknowledged. But there are applications where the flexibility of plastic could be advantageous, he said, raising the possibility of plastic computers being information processors in refrigerators or other common home electronics.
“This won’t be faster or smaller, but it will be cheaper and lower power, we hope,” Flatte said.
The OpenStack-based service also extends the Dell and Red Hat partnership to Red Hat’s OpenShift Platform as a Service (PaaS) and Linux container products.
Dell and Red Hat said their cloud partnership is intended to “address enterprise customer demand for more flexible, elastic and dynamic IT services to support and host non-business critical applications”.
The integration of OpenShift with Red Hat Enterprise Linux brings container enhancements based on the Docker platform, which the companies said will enable a write-once culture, making programs portable across public, private and hybrid cloud environments.
Paul Cormier, president of Products and Technologies at Red Hat said, “Cloud innovation is happening first in open source, and what we’re seeing from global customers is growing demand for open hybrid cloud solutions that meet a wide variety of requirements.”
Sam Greenblatt, VP of Enterprise Solutions Group Technology Strategy at Dell, added, “Dell is a long-time supporter of OpenStack and this important extension of our commitment to the community now will include work for OpenShift and Docker. We are building on our long history with open source and will apply that expertise to our new cloud solutions and co-engineering work with Red Hat.”
Dell Red Hat Cloud Solutions are available from today, with support for platform architects available from Dell Cloud Services.
Earlier this week, Red Hat announced Atomic Host, a new fork of Red Hat Enterprise Linux (RHEL) specifically tailored for containers. Last year, the company broke bad with its Fedora Linux distribution, codenamed Heisenbug.
Oracle issued a comprehensive list of its software that may or may not be impacted by the OpenSSL vulnerability known as Heartbleed, while warning that no fixes are yet available for some likely affected products.
The list includes well over 100 products that appear to be in the clear, either because they never used the version of OpenSSL reported to be vulnerable to Heartbleed, or because they don’t use OpenSSL at all.
However, Oracle is still investigating whether another roughly 20 products, including MySQL Connector/C++, Oracle SOA Suite and Nimbula Director, are vulnerable.
Oracle determined that seven products are vulnerable and is offering fixes. These include Communications Operation Monitor, MySQL Enterprise Monitor, MySQL Enterprise Server 5.6, Oracle Communications Session Monitor, Oracle Linux 6, Oracle Mobile Security Suite and some Solaris 11.2 implementations.
Another 14 products are likely to be vulnerable, but Oracle doesn’t have fixes for them yet, according to the post. These include BlueKai, Java ME and MySQL Workbench.
Users of Oracle’s growing family of cloud services may also be able to breathe easy. “It appears that both externally and internally (private) accessible applications hosted in Oracle Cloud Data Centers are currently not at risk from this vulnerability,” although Oracle continues to investigate, according to the post.
Heartbleed, which was revealed by researchers last week, can allow attackers who exploit it to steal information on systems thought to be protected by OpenSSL encryption. A fix for the vulnerable version of OpenSSL has been released and vendors and IT organizations are scrambling to patch their products and systems.
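The vulnerable releases are OpenSSL 1.0.1 through 1.0.1f, with 1.0.1g carrying the fix (the older 0.9.8 and 1.0.0 branches never contained the bug). A minimal triage helper along those lines might look like the sketch below; it is a plain version-string lookup for inventory scripts, not a substitute for actually patching:

```python
# Heartbleed (CVE-2014-0160) affects OpenSSL 1.0.1 through 1.0.1f.
# 1.0.1g contains the fix; 0.9.8 and 1.0.0 branches were never affected.
VULNERABLE_VERSIONS = {f"1.0.1{letter}" for letter in "abcdef"} | {"1.0.1"}

def is_heartbleed_vulnerable(version: str) -> bool:
    """Return True if the given OpenSSL version string falls in the
    known-vulnerable range. Assumes a plain 'x.y.z[letter]' string."""
    return version in VULNERABLE_VERSIONS

print(is_heartbleed_vulnerable("1.0.1f"))  # True  (vulnerable)
print(is_heartbleed_vulnerable("1.0.1g"))  # False (patched)
```

In practice, builds patched via distribution backports can report a nominally vulnerable version string, so a version check like this is only a first-pass filter.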
Observers consider Heartbleed one of the most serious Internet security vulnerabilities in recent times.
Meanwhile, this week Oracle also shipped 104 patches as part of its regular quarterly release.
The patch batch includes security fixes for Oracle Database 11g and 12c, Fusion Middleware 11g and 12c, Fusion Applications, WebLogic Server and dozens of other products. Some 37 patches target Java SE alone.
A detailed rundown of the vulnerabilities’ relative severity has been posted to an official Oracle blog.
“I think you’ll see wide-area, high-bandwidth [smart]watches this year at some point,” said Glenn Lurie, president of emerging devices at AT&T, in an interview.
The company has a group working in Austin, Texas, on thousands of wearable-device prototypes, and is also looking at certifying third-party devices for use on its network, Lurie said.
“A majority of stuff you’re going to see today that’s truly wearable is going to be in a watch form factor to start,” Lurie said. If smartwatch use takes off — “and we believe it can,” Lurie said — then those devices could become hubs for wearable computing.
Right now smartwatches lack LTE capabilities, so they are largely reliant on smartphones for apps and notifications. With a mobile broadband connection, a smartwatch becomes an “independent device,” Lurie said.
“We’ve been very, very clear in our opinion that a wearable needs to be a stand-alone device,” Lurie said.
AT&T and Filip Technologies in January released the Filip child tracker wristwatch, which also allows a parent to call a child over AT&T’s network. Filip could be improved, but those are the kind of wearable products that AT&T wants to bring to market.
Wearables for home health care are also candidates for LTE connections, Lurie said, but fitness trackers may be too small for LTE connectivity, at least for now.
Lurie couldn’t say when smartglasses would be certified to work on AT&T’s network. Google last year said adding cellular capabilities to its Glass eyewear wasn’t in the plans because of battery use. But AT&T is willing to experiment with devices to see where LTE would fit.
“It’s one thing if I’m buying it to go out for a job, it’s another thing if I’m going to wear it everyday. Those are the things people are debating right now — how that’s all going to come out,” Lurie said. “There’s technology and there’s innovation happening, and those things will get solved.”
Lurie said battery issues are being resolved, but there are no network capacity issues. Wearable devices don’t use too much bandwidth as they relay short bursts of information, unless someone is, for instance, listening to Pandora radio on a smartwatch, Lurie said.
But AT&T is building out network capacity, adding Wi-Fi networks, and virtualizing networks to accommodate more devices.
“We don’t have network issues, we don’t have any capacity issues,” Lurie said. “The key element to adding these devices is a majority of [them] aren’t high-bandwidth devices.”
AT&T wants to make wearables work with its home offerings like the Digital Life home automation and security system. AT&T is also working with car makers for LTE integration, with wearables interacting with vehicles to open doors and start ignitions.
MediaTek has shown off one of its most interesting SoC designs to date at the China Electronic Information Expo. The MT6595 was announced a while ago, but this is apparently the first time MediaTek showcased it in action.
It is a big.LITTLE octa-core with integrated LTE support. It has four Cortex-A17 cores backed by four Cortex-A7 cores and it can hit 2.2GHz. The GPU of choice is the PowerVR G6200. It supports 4K2K video playback and recording, as well as H.265. It can deal with a 20-megapixel camera, too.
The really interesting bit is the modem. It can handle TD-LTE/FDD-LTE/WCDMA/TD-SCDMA/GSM networks, hence the company claims it is the first octa-core with on-board LTE. Qualcomm has already announced an LTE-enabled octa-core, but it won’t be ready anytime soon. The MT6595 will be – it is expected to show up in actual devices very soon.
Of course, MediaTek is going after a different market. Qualcomm is building the meanest possible chip with four 64-bit Cortex-A57 cores and four Cortex-A53 cores, while MediaTek is keeping the MT6595 somewhat simpler, with smaller 32-bit cores.
The revisions more explicitly spell out the manner in which Google software scans users’ emails, both when messages are stored on Google’s servers and when they are in transit, a controversial practice that has been at the heart of litigation.
Last month, a U.S. judge decided not to combine several lawsuits that accused Google of violating the privacy rights of hundreds of millions of email users into a single class action.
Users of Google’s Gmail email service have accused the company of violating federal and state privacy and wiretapping laws by scanning their messages so it could compile secret profiles and target advertising. Google has argued that users implicitly consented to its activity, recognizing it as part of the email delivery process.
Google spokesman Matt Kallman said in a statement that the changes “will give people even greater clarity and are based on feedback we’ve received over the last few months.”
Google’s updated terms of service added a paragraph stating that “our automated systems analyze your content (including emails) to provide you personally relevant product features, such as customized search results, tailored advertising, and spam and malware detection. This analysis occurs as the content is sent, received, and when it is stored.”
Mark Karpeles, the founder of Mt. Gox, has refused to come to the United States to answer questions about the Japanese bitcoin exchange’s U.S. bankruptcy case, Mt. Gox lawyers told a federal judge on Monday.
In the court filing, Mt. Gox lawyers cited a subpoena from the U.S. Department of Treasury’s Financial Crimes Enforcement Network, which has closely monitored virtual currencies like bitcoin.
“Mr. Karpeles is now in the process of obtaining counsel to represent him with respect to the FinCEN Subpoena. Until such time as counsel is retained and has an opportunity to ‘get up to speed’ and advise Mr. Karpeles, he is not willing to travel to the U.S.”, the filing said.
The subpoena requires Karpeles to appear and provide testimony in Washington, D.C., on Friday.
The court papers also said a Japanese court had been informed of the issue and that a hearing was scheduled on Tuesday in Japan.
Bitcoin is a digital currency that, unlike conventional money, is bought and sold on a peer-to-peer network independent of central control. Its value has soared in the last year, and the total worth of bitcoins minted is now about $7 billion.
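That “total worth” figure is simply the going price multiplied by the number of bitcoins mined so far. With illustrative values from around that time (the price and supply below are assumptions for the sake of the arithmetic, not figures from the article), the calculation looks like:

```python
# Back-of-the-envelope market-cap arithmetic behind the "$7 billion" figure.
# Both inputs are illustrative assumptions, not numbers from the article.
btc_price_usd = 550.0            # assumed price per bitcoin, spring 2014
coins_in_circulation = 12.6e6    # assumed bitcoins mined to date

market_cap_usd = btc_price_usd * coins_in_circulation
print(f"~${market_cap_usd / 1e9:.1f} billion")  # prints "~$6.9 billion"
```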
Mt. Gox, once the world’s biggest bitcoin exchange, filed for bankruptcy protection in Japan last month, saying it may have lost nearly half a billion dollars worth of the virtual coins due to hacking into its computer system.
According to Monday’s court filings, the subpoena did not specify topics for discussion.
In the court filings, Karpeles’ lawyers asked the court to delay the bankruptcy deposition to May 5, 2014, but said that Mt. Gox could not guarantee that Karpeles would attend then either.
Researchers last week warned they uncovered Heartbleed, a bug that targets the OpenSSL software commonly used to keep data secure, potentially allowing hackers to steal massive troves of information without leaving a trace.
Security experts initially told companies to focus on securing vulnerable websites, but have since warned about threats to technology used in data centers and on mobile devices running Google Inc’s Android software and Apple Inc’s iOS software.
Scott Totzke, BlackBerry senior vice president, told Reuters on Sunday that while the bulk of BlackBerry products do not use the vulnerable software, the company does need to update two widely used products: Secure Work Space corporate email and BBM messaging program for Android and iOS.
He said they are vulnerable to attacks by hackers if they gain access to those apps through either WiFi connections or carrier networks.
Still, he said, “The level of risk here is extremely small,” because BlackBerry’s security technology would make it difficult for a hacker to succeed in gaining data through an attack.
“It’s a very complex attack that has to be timed in a very small window,” he said, adding that it was safe to continue using those apps before an update is issued.
Google spokesman Christopher Katsaros declined comment. Officials with Apple could not be reached.
Security experts say that other mobile apps are also likely vulnerable because they use OpenSSL code.
Michael Shaulov, chief executive of Lacoon Mobile Security, said he suspects that apps that compete with BlackBerry in an area known as mobile device management are also susceptible to attack because they, too, typically use OpenSSL code.
He said mobile app developers have time to figure out which products are vulnerable and fix them.
“It will take the hackers a couple of weeks or even a month to move from ‘proof of concept’ to being able to exploit devices,” said Shaulov.
Technology firms and the U.S. government are taking the threat extremely seriously. Federal officials warned banks and other businesses on Friday to be on alert for hackers seeking to steal data exposed by the Heartbleed bug.
Companies including Cisco Systems Inc, Hewlett-Packard Co, International Business Machines Corp, Intel Corp, Juniper Networks Inc, Oracle Corp and Red Hat Inc have warned customers they may be at risk. Some updates are out, while other companies, like BlackBerry, are rushing to get them ready.
With Amazon’s Fire TV device the first out the door, the second wave of microconsoles has just kicked off. Amazon’s device will be joined in reasonably short order by one from Google, with an app-capable update of the Apple TV device also likely in the works. Who else will join the party is unclear; Sony’s Vita TV, quietly soft-launched in Japan last year, remains a potentially fascinating contender with the right messaging and services behind it, but for now it’s out of the race. One thing seems certain, though; at least this time we’re actually going to have a party.
“Second wave”, you see, rather implies the existence of a first wave of microconsoles, but last time out the party was disappointing, to say the least. In fact, if you missed the first wave, don’t feel too bad; you’re in good company. Despite enthusiasm, Kickstarter dollars and lofty predictions, the first wave of microconsole devices tanked. Ouya, Gamestick and their ilk just turned out to be something few people actually wanted or needed. Somewhat dodgy controllers and weak selections of a sub-set of Android’s game library merely compounded the basic problem – they weren’t sufficiently cheap or appealing compared to the consoles reaching their end-of-life and armed with a vast back catalogue of excellent, cheap AAA software.
“The second wave microconsoles will enjoy all the advantages their predecessors did not. They’ll be backed by significant money, marketing and development effort, and will have a major presence at retail”
That was always the reality which deflated the most puffed-up “microconsoles will kill consoles” argument; the last wave of microconsoles sucked compared to consoles, not just for the core AAA gamer but for just about everyone else as well. Their hardware was poor, their controllers uncomfortable, their software libraries anaemic and their much-vaunted cost savings resulting from mobile game pricing rather than console game pricing tended to ignore the actual behaviour of non-core console gamers – who rarely buy day-one software and as a result get remarkably good value for money from their console gaming experiences. Comparing mobile game pricing or F2P models to $60 console games is a pretty dishonest exercise if you know perfectly well that most of the consumers you’re targeting wouldn’t dream of spending $60 on a console game, and never have to.
Why is the second wave of microconsoles going to be different? Three words: Amazon, Google, Apple. Perhaps Sony; perhaps even Samsung or Microsoft, if the wind blows the right direction for those firms (a Samsung microconsole, sold separately and also bundled into the firm’s TVs, as Sony will probably do with Vita TV in future Bravia televisions, would make particular sense). Every major player in the tech industry has a keen interest in controlling the channel through which media is consumed in the living room. Just as Sony and Microsoft originally entered the games business with a “trojan horse” strategy for controlling living rooms, Amazon and Google now recognise games as being a useful way to pursue the same objective. Thus, unlike the plucky but poorly conceived efforts of the small companies who launched the first wave of microconsoles, the second wave is backed by the most powerful tech giants in the world, whose titanic struggle with each other for control of the means of media distribution means their devices will have enormous backing.
To that end, Amazon has created its own game studios, focusing their efforts on the elusive mid-range between casual mobile games and core console games. Other microconsole vendors may take a different approach, creating schemes to appeal to third-party developers rather than building in-house studios (Apple, at least, is almost guaranteed to go down this path; Google could yet surprise us by pursuing in-house development for key exclusive titles). Either way, the investment in software will come. The second wave of microconsoles will not be “boxes that let you play phone games on your TV”; at least not entirely. Rather, they will enjoy dedicated software support from companies who understand that a hit exclusive game would be a powerful way to drive installed base and usage.
Moreover, this wave of microconsoles will enjoy significant retail support. Fire TV’s edge is obvious; Amazon is the world’s largest and most successful online retailer, and it will give Fire TV prime billing on its various sites. The power of being promoted strongly by Amazon is not to be underestimated. Kindle Fire devices may still be eclipsed by the astonishing strength of the iPad in the tablet market, but they’re effectively the only non-iPad devices in the running, in sales terms, largely because Amazon has thrown its weight as a retailer behind them. Apple, meanwhile, is no laggard at retail, operating a network of the world’s most profitable stores to sell its own goods, while Google, although the runt of the litter in this regard, has done a solid job of balancing direct sales of its Nexus handsets with carrier and retail sales, work which it could bring to bear effectively on a microconsole offering.
In short, the second-wave microconsoles will enjoy all the advantages their predecessors lacked. They’ll be backed by significant money, marketing and development effort, and will have a major presence at retail. Moreover, they’ll be “trojan horse” devices in more ways than one, since their primary purpose will be as media devices, streaming content from Amazon, Google Play, iTunes, Hulu, Netflix and so on, while also serving as solid gaming devices in their own right. Here, then, is the convergence that microconsole advocates (and the rather less credible advocates of Smart TV) have been predicting all along: a tiny box that will stream all your media off the network and also build in enough gaming capability to satisfy the mainstream of consumers. Between the microconsole under the TV and the phone in your pocket, that’s gaming all sewn up, they reckon; just as a smartphone camera is good enough for almost everyone, leaving digital SLRs and their ilk to the devoted hobbyist, the professional and the poseur, a microconsole and a smartphone will be more than enough gaming for almost everyone, leaving dedicated consoles and gaming PCs to a commercially irrelevant hardcore fringe.
There are, I think, two problems with that assessment. The first is the notion that the “hardcore fringe” who will use dedicated gaming hardware is small enough to be commercially irrelevant; I’ve pointed out before that the strong growth of a new casual gaming market does not have to come at the cost of growth in the core market, and may even support it by providing a new stream of interested consumers. This is not a zero-sum game, and will not be a zero-sum game until we reach a point where there are no more non-gaming consumers out there to introduce to our medium. Microconsoles might do very well and still cause not the slightest headache to PlayStation, Xbox or Steam.
The second problem with the assessment is a problem with the microconsoles themselves – a problem which the Fire TV suffers from very seriously, and which will likely be replicated by subsequent devices. The problem is control.
Games are an interactive experience. Having a box which can run graphically intensive games is only one side of the equation – it is, arguably, the less important side of the equation. The other side is the controller, the device through which the player interacts with the game world. The most powerful graphics hardware in the world would be meaningless without some enjoyable, comfortable, well-designed method of interaction for players; and out of the box, Fire TV doesn’t have that.
Sure, you can control games (some of them, anyway) with the default remote control, but that’s going to be a terrible experience. I’m reminded of terribly earnest people ten years ago trying to convince me that you could have fun controlling complex games on pre-smartphone phones, or on TV remote controls linked up to cable boxes; valiant efforts ultimately doomed not only by a non-existent business ecosystem but by a terrible, terrible user experience. Smartphones heralded a gaming revolution not just because of the App Store ecosystem, but because it turned out that a sensitive multi-touch screen isn’t a bad way of controlling quite a lot of games. It still doesn’t work for many types of game; a lot of traditional game genres are designed around control mechanisms that simply can’t be shoehorned onto a smartphone. By and large, though, developers have come to grips with the possibilities and limitations of the touchscreen as a controller, and are making some solid, fun experiences with it.
With Fire TV, and I expect with whatever offering Google and Apple end up making, the controller is an afterthought – both figuratively and literally. You have to buy it separately, which keeps down the cost of the basic box but makes it highly unlikely that the average purchaser will be able to have a good game experience on the device. The controller itself doesn’t look great, which doesn’t help much, but simply being bundled with the box would make a bold statement about Fire TV’s gaming ambitions. As it is, this is not a gaming device. It’s a device that can play games if you buy an add-on; the notion that a box is a “gaming device” just because its internal chips can process game software, even if it doesn’t have the external hardware required to adequately control the experience, is the kind of notion only held by people who don’t play or understand games.
This is the Achilles’ heel of the second generation of microconsoles. They offer a great deal – the backing of the tech giants, potentially huge investment and enormous retail presence. They could, with the right wind in their sails, help to bring “sofa gaming” to the same immense, casual audience that presently enjoys “pocket gaming”. Yet the giant unsolved question remains: how will these games be controlled? A Fire TV owner, a potential casual gamer, who tries to play a game using his remote control and finds the experience frustrating and unpleasant won’t go off and buy a controller to make things better; he’ll shrug and return to the Hulu app, dismissing the Games panel of the device as a pointless irrelevance.
The answer doesn’t have to be “bundle a joypad”. Perhaps it’ll be “tether to a smartphone”, a decision which would demand a whole new approach to interaction design (which would be rather exciting, actually). Perhaps a simple Wiimote style wand could double as a remote control and a great motion controller or pointer. Perhaps (though I acknowledge this as deeply unlikely) a motion sensor like a “Kinect Lite” could be the solution. Many compelling approaches exist which deserve to be tried out; but one thing is absolutely certain. While the second generation of microconsoles are going to do very well in sales terms, they will primarily be bought as media streaming boxes – and will never be an important games platform until the question of control gets a good answer.
For a trial that centers on smartphones and the technology they use, it’s more than a little ironic. The entire case might not even be taking place if the market weren’t so big and important, but everyone’s constant need for connectivity is causing problems in the court, hence the new sign.
The problems have centered on the system that displays the court reporter’s real-time transcription onto monitors on the desks of Judge Lucy Koh, the presiding judge in the case, and the lawyers of Apple and Samsung. The system, it seems, is connected via Wi-Fi and that connection keeps failing.
“We have a problem,” Judge Koh told the courtroom on April 4, soon after the problem first appeared. Without the system, Koh said she couldn’t do her job, so if people didn’t shut off electronics, she might have to ban them from the courtroom.
In many other courts, electronic devices are routinely banned, but the Northern District of California and Judge Koh have embraced technology more than most. While reporters and spectators are limited to a pen and paper in courts across the country, the court here permits live coverage through laptops and even provides a free Wi-Fi network.
On Monday, the problems continued and Judge Koh again asked for all cellphones to be switched off.
But not everyone listened. A scan of the courtroom revealed at least one hotspot hadn’t been switched off: It was an SK Telecom roaming device from South Korea, likely used by a member of Samsung’s team.
The hotspot was switched off by the end of the day, but on Tuesday there were more problems.
“You. Ma’am. You in the front row,” Judge Koh said sternly during a break. She’d spotted an Apple staffer using her phone and made the culprit stand, give her name and verbally agree not to use the handset again in court.
As a result of all the problems, lawyers for Apple and Samsung jointly suggested using a scheduled two-day break in the case to hardwire the transcription computers to the court’s network.
The cable wasn’t installed.
“I believe there were some issues. We’re attempting to install it,” one of the attorneys told IDG News Service during the court lunch break.
So for now, the problems continue.
The clerk opened the day with an appeal to switch phones off, “not even airplane mode.”
That still didn’t help.
The transcription screens failed at 9:09 a.m., just minutes into the first session of the morning.