Outlook.com, which replaced Hotmail and offers a similar feature for chatting with people on Facebook and Skype, will roll out this Gmail capability over the next few days to its 400 million users worldwide, according to Microsoft.
People will also be able to engage in IM chats with Gmail users from the interface of their SkyDrive cloud storage and file sharing application.
“With this feature, the next time you’re reading an email from someone who uses Gmail, you can reply with a quick chat right from your Outlook.com inbox. And if you’re working together on an Office document in SkyDrive, you can send an instant message to a Google contact with just a click,” wrote Microsoft official Douglas Pearce in a blog post on Tuesday.
Pearce also took a dig at Gmail in the blog post, saying that the new feature is “one more reason to make the switch” and that part of the motivation was to help Outlook.com users chat “with friends stuck on Gmail.”
Microsoft launched a preview of Outlook.com with much fanfare in July 2012, positioning it as a re-imagining of webmail from the data center to the user experience, and as a better alternative to Gmail and Yahoo Mail.
Microsoft has also been attacking Gmail for months via its Scroogled campaign, in which Microsoft accuses Google of disrespecting the privacy of Gmail users by matching ads to the text of their messages.
Earlier this month, Microsoft announced it had completed migrating all users from Hotmail to Outlook.com, whose improvements include a redesigned user interface, broad syncing capabilities, improved message sorting and native integration with Facebook, Twitter and other sites.
The Virginia Tech College of Engineering debuted the prototype robot, named Cyro. The life-like, autonomous robotic jellyfish weighs 170 pounds and is 5 feet 7 inches in height.
The research is backed by the U.S. Naval Undersea Warfare Center and the Office of Naval Research, which are looking for self-powering, autonomous robots to do underwater surveillance or to monitor the environment.
Cyro is the successor to the RoboJelly, a robotic jellyfish that the same research team unveiled last year. Unlike Cyro, RoboJelly is a small machine – about the size of a man’s hand, according to Virginia Tech.
“A larger vehicle will allow for more payload, longer duration and longer range of operation,” said Alex Villanueva, a doctoral student working on the project, in a statement. “Biological and engineering results show that larger vehicles have a lower cost of transport, which is a metric used to determine how much energy is spent for traveling.”
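The cost-of-transport metric Villanueva mentions can be made concrete with a back-of-the-envelope calculation. A common dimensionless form divides propulsive power by weight times speed; the numbers below are purely illustrative and are not measurements from Cyro or RoboJelly:

```python
def cost_of_transport(power_w, mass_kg, speed_m_s, g=9.81):
    """Dimensionless cost of transport: propulsive power divided by
    weight (m * g) times speed, i.e. energy spent per unit weight
    per unit distance travelled."""
    return power_w / (mass_kg * g * speed_m_s)

# Purely illustrative numbers for a small and a large vehicle:
small_cot = cost_of_transport(power_w=5.0, mass_kg=0.2, speed_m_s=0.05)
large_cot = cost_of_transport(power_w=50.0, mass_kg=77.0, speed_m_s=0.25)

print(small_cot > large_cot)  # True: the larger vehicle is cheaper per unit weight-distance
```

This is the sense in which larger vehicles come out ahead: power grows more slowly than the weight-times-speed denominator.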
The researchers said they chose to base their underwater robots on jellyfish because of their low metabolic rate, which enables them to consume little energy.
Jellyfish also come in many different sizes, shapes and colors, which gives scientists a range of designs to work with. They are also found in every major oceanic area, which would help camouflage robots conducting surveillance around the world.
Scientists at Virginia Tech aren’t the only ones working on swimming robots.
In the summer of 2010, MIT researchers reported that they used nanotechnology to build a robot that can autonomously navigate across the surface of the ocean to clean up oil spills. Scientists hope that someday a fleet of these aquatic robots can clean up oil spills more quickly and cheaply than current methods.
In 2009, scientists at the University of Bath built a swimming robot powered by a fin instead of a more boat-like propeller.
Gymnobot, the robotic fish, has a fin that runs the length of the robot’s rigid “fish” body, undulating to make waves in the water, propelling the robot forward or backward. The robotic design is based on the Amazonian knifefish.
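The undulation described above can be sketched, very roughly, as a travelling sine wave running along the fin; reversing the wave's direction of travel reverses the thrust. The parameters below are hypothetical, not Gymnobot's actual specifications:

```python
import math

def fin_displacement(x_m, t_s, amplitude_m=0.02, wavelength_m=0.5, freq_hz=2.0):
    """Lateral displacement of a point x_m metres along the fin at time t_s,
    modelled as a travelling sine wave. Flipping the sign of freq_hz reverses
    the wave's direction of travel, i.e. forward versus backward propulsion."""
    return amplitude_m * math.sin(2 * math.pi * (x_m / wavelength_m - freq_hz * t_s))

# Sample the fin's shape along its length at one instant:
shape = [fin_displacement(x / 100, t_s=0.1) for x in range(0, 51, 10)]
```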
In this most recent work on swimming robots, Cyro is modeled on and named after the jellyfish Cyanea capillata, also known as the lion's mane jellyfish.
“We hope to improve on this robot and reduce power consumption and improve swimming performance as well as better mimic the morphology of the natural jellyfish,” Villanueva stated. “Our hopes for Cyro’s future is that it will help understand how the propulsion mechanism of such animal scales with size.”
The vulnerability is identified as CVE-2013-3336 and affects ColdFusion 10, 9.0.2, 9.0.1, 9.0 and earlier versions for Windows, Macintosh and UNIX, Adobe said in an advisory published Wednesday.
The company credited Marcin Siedlarz of Symantec’s Security Response team with reporting the issue. “There are reports that an exploit for this vulnerability is publicly available,” Adobe said.
The company is working on a fix and expects to release it publicly on May 14. Until then, customers are advised to restrict public access to certain sensitive directories like CFIDE/administrator, CFIDE/adminapi and CFIDE/gettingstarted.
Information on how to restrict access to these directories is provided in the ColdFusion 9 Lockdown Guide and the ColdFusion 10 Lockdown Guide. Customers who hardened their ColdFusion installations following the guidance provided in these technical documents are already protected against CVE-2013-3336, Adobe said.
Even though it’s not as widely used as some other Adobe products, ColdFusion has been targeted by hackers in the past. In April, virtual private server hosting company Linode reported that hackers gained access to its Web server and customer database by exploiting a previously unknown ColdFusion vulnerability.
In January, Adobe issued a security advisory warning customers about four previously unknown ColdFusion vulnerabilities that were being actively exploited by attackers. The mitigation steps recommended at the time also involved disabling external access to the /CFIDE/administrator and /CFIDE/adminapi directories.
The company reported in its latest 10-Q filing with the U.S. Securities and Exchange Commission that in the first quarter of the year it began rolling out restructuring programs that will mean a workforce reduction of 1,004 positions.
The layoffs are part of an $80-million initiative to cut costs. The company also listed restructuring initiatives in its first quarter earnings report.
“The actions will impact positions around the globe covering our Information Storage, RSA Information Security and Information Intelligence Group segments, and is expected to result in a total charge of approximately $80.0 million, with total cash payments associated with the plan expected to be approximately $73.0 million,” EMC stated.
In the first quarter, EMC’s VMware implemented a plan to streamline its operations. That plan includes the elimination of approximately 800 positions across all major functional groups and geographies, EMC stated.
Last year, EMC implemented separate restructuring programs to create “operational efficiencies,” which resulted in the elimination of 1,163 positions.
The restructuring and layoffs are expected to be complete within a year of the start of each program, EMC said.
Despite the layoffs, EMC spokesperson Lesley Ogrodnick told the Boston Globe the company expects to end 2013 with more employees than it had at the start of the year.
Anti-virus software for Android is easily fooled, according to security researchers from Northwestern University and North Carolina State University. The researchers tested ten of the most popular AV products on Android and discovered that they were easily defeated by common obfuscation techniques.
AV software from Symantec, AVG, Kaspersky Lab, Trend Micro, ESET, ESTSoft, Lookout, Zoner, Webroot, and Dr. Web was tested as part of an evaluation of mobile security software. Using a tool called DroidChameleon, the researchers transformed malware samples to generate new variants containing exactly the same malicious functionality as before. These new variants were then passed to the AV products and, much to the surprise of the paper's authors, were rarely flagged.
The paper said that the findings showed that all the anti-malware products evaluated are susceptible to common evasion techniques and may succumb to even trivial transformations not involving code-level changes. More than 43 per cent of the signatures used by the AV products are based on file names, checksums (or binary sequences) or information obtained by the PackageManager API.
Minor changes to a virus will, for the most part, render these products' protection useless.
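To see why checksum-based signatures are so fragile, consider a toy detector that flags samples purely by hash. A single appended padding byte, which leaves the malware's behaviour untouched, is enough to slip past it. This is a minimal sketch, not any vendor's actual engine:

```python
import hashlib

def checksum_signature(sample: bytes) -> str:
    """A toy detector 'signature': the SHA-256 checksum of the whole sample."""
    return hashlib.sha256(sample).hexdigest()

# Signature database containing one known-bad checksum:
known_bad = {checksum_signature(b"malicious-payload")}

original = b"malicious-payload"
# A trivial, behaviour-preserving transformation: repack with one padding byte.
repacked = original + b"\x00"

print(checksum_signature(original) in known_bad)   # True  -- detected
print(checksum_signature(repacked) in known_bad)   # False -- same behaviour, missed
```

Signatures based on file names or package metadata fail the same way: the attacker changes the label, not the behaviour.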
McAfee, which is owned by Intel, is one of the biggest security vendors but has so far been focused on end-point products such as anti-virus and firewall software that runs on consumer PCs. Now the firm has made a move to go deeper into the network, buying security software vendor Stonesoft for $389m in cash.
Unusually for an acquisition announcement, McAfee described its rationale for purchasing Stonesoft. The firm said network security will become a vital security component and cited analysts' comments on how the company is positioned relative to its rivals, adding that Stonesoft's products will fit with McAfee's existing intrusion prevention and enterprise firewall software.
McAfee president Michael DeCesare said, “Stonesoft is a leading innovator in this important market segment. We plan to integrate Stonesoft’s offerings with other McAfee products to realize the power of McAfee’s Security Connected strategy. Stonesoft products will benefit from the collective expertise of more than 7,200 McAfee employees.
“Leveraging McAfee’s cloud-based Global Threat Intelligence service will provide our combined customers with unparalleled security.”
McAfee's parent Intel has said that security will be a key part of its strategy in the future. However, almost two years after the firm spent $7.6bn to buy McAfee, it is still not clear how it will incorporate the firm's security software into its core silicon products.
Nevertheless, given McAfee’s ability to spend a further $389m to buy another security vendor, it seems that Intel is happy to continue spending large amounts of money to build its security business.
Former FBI counter-terrorism agent Tim Clemente appeared on CNN to claim that most of the great unwashed did not know the real capabilities and behavior of the US surveillance state. The comments stem from anonymous government officials claiming that they are now focused on telephone calls between one of the Boston bombers and his wife, to see whether she had prior knowledge of the plot or participated in any way.
The only problem: the calls had already been made, so how could the FBI listen to them? Clemente was asked whether the FBI would be able to discover the contents of past telephone conversations between the two, and he quite clearly insisted that it could.
He said that there were ways in national security investigations to find out exactly what was said in that conversation. It was not necessarily something the FBI would want to present in court, but it might help lead the investigation and/or lead to questioning of the wife. He added that the FBI could certainly find that out, and that all of that material is being captured as we speak, whether people know it or like it or not.
More than 20 percent of data brokers probed by the U.S. Federal Trade Commission potentially violated a U.S. privacy law when sharing personal data with agency workers posing as companies wanting to purchase information.
This week, the FTC warned 10 data brokers, most with a significant online presence, that they may be violating the Fair Credit Reporting Act (FCRA).
The FCRA requires consumer reporting agencies to reasonably verify the identities of data customers and to ensure that these customers have a legitimate purpose for receiving the information.
The FTC, in a test-shopping operation, found that the 10 data brokers appeared to violate the FCRA by sharing the information without running the required checks on their data customers.
The data brokers may be subject to the FCRA because two of them appeared to offer pre-screened lists of consumers for credit offers, two others appeared to offer consumer information for making insurance decisions, and six appeared to offer employment screening information.
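The FCRA obligation described above amounts to a two-part gate before any consumer report is sold: verify who the buyer is, and confirm they have a permissible purpose. A simplified sketch (the purpose list here is abbreviated, not the statute's full enumeration):

```python
# Simplified sketch; the real FCRA enumerates many more permissible purposes.
PERMISSIBLE_PURPOSES = {"credit", "insurance", "employment"}

def may_release_report(identity_verified: bool, stated_purpose: str) -> bool:
    """Both checks must pass before a consumer report is shared."""
    return identity_verified and stated_purpose in PERMISSIBLE_PURPOSES

print(may_release_report(True, "employment"))   # True
print(may_release_report(False, "employment"))  # False: identity never verified
print(may_release_report(True, "marketing"))    # False: not a permissible purpose
```

The brokers warned by the FTC effectively skipped both checks when dealing with the agency's undercover shoppers.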
The FTC checked 45 data brokers during its test-shopping operation, the agency said in a press release.
Four of the companies that were sent warning letters didn’t immediately respond to requests for comment.
The FTC issued the letters in conjunction with an international privacy practice transparency sweep conducted by the Global Privacy Enforcement Network (GPEN), which connects privacy enforcement authorities from across the globe. GPEN members are focusing this week on educating companies about their obligations related to the privacy of consumers’ personal information, the FTC said.
It’s ancient history now, but once upon a time, if you wanted to play the most recent and most interesting games, you had to get up, leave the house and make your way to an arcade. Games consoles and home computers lived further down the food chain, their owners waiting for often sub-par versions of glorious arcade hits to be released on home systems. The real experience happened in an arcade.
Even to those who experienced that era, it’s a little hard to believe when you look at the sad remnants of their former glory which remain. Even in supposedly arcade-mad Japan, games generally find themselves wedged ignominiously in between gambling machines occupied by middle-aged chain-smokers and UFO Catcher booths promising, but rarely delivering, stuffed toys and sweets for bored teens on dates. In western countries, sad, lonely fighting game machines are just stuffed in where “arcade” owners ran out of fruit machines to install.
The reasons for this change are fundamentally technological. Arcade machines are big, bulky and expensive to move or replace. Once, that meant that they were vastly more powerful than home systems – but the accelerating pace of technological progress turned the size and expense of arcade machines into a liability rather than an advantage. Cheap, rapidly updated computers and consoles (and eventually even phones) first matched and then far outstripped the processing capabilities of big arcade cabinets. Rapid updates in graphics, processing, storage, networking, controls and screen resolutions were comfortably adopted by the home market, the costs buffered by cheap, cheerful hardware and absorbed by the wallets of millions of consumers. Arcade operators, faced with replacing large numbers of huge, expensive systems in order to keep track of such changes, fell behind completely.
Social factors either exacerbated or softened this blow, but these were highly region dependent. In Japan, where small family living spaces have engendered a culture in which many social activities are carried out external to the home, arcades persisted as date spots, as places to hang out with friends and – perhaps most importantly – as a venue for games too large, too noisy or too intrusive to be played in a small family home. In parts of the West, though, social factors intervened to hasten the decline, with a perception of arcades as “seedy” venues (in the grand tradition of pool halls and their ilk) discouraging many potential players, while regions with legalised gambling were quick to drop videogames in favour of more profitable slot machines.
Over the years, there has been talk of an "arcade renaissance" on several occasions, yet each time has ended in disappointment. Even as living spaces in many Western countries (the UK is a particularly notable example) have shrunk dramatically in average size, Western consumers have demonstrated a continued willingness to engage with loud, bulky games: Rock Band and Guitar Hero were hugely successful as home games in the West, where their Japanese equivalents, Konami's Guitar Freaks and Drum Mania, have acted as sustaining lifeblood for arcade venues. It's also notable that even as Japanese arcades have innovated and invested, launching extraordinary new games which leverage all sorts of new technologies, from the country's ultra high-speed broadband networks through to the possibilities of RFID-enabled cards, the arcade sector's health has still declined – a drop-off in footfall, revenue and floor space that's been slower than in the West, but still isn't exactly the rude health you might have come to believe from fawning articles about amazing Japanese arcades in the western media.
As such, it’s important to be cautious about any notion of an arcade recovery. Yet if we were to envisage any potential uplift in the fortunes of the out-of-home gaming sector, we can easily say what one key factor would be – just as in the heyday of the arcade, these venues would need to provide games which you simply cannot experience at home. This won’t come about, this time around, through more powerful graphics or processing – the trends in those areas are focused on miniaturisation and cost-efficiency, targeting the ability to put high-end 3D into phones rather than building pricey, bulky, ultra high-end systems. Instead, the focus would have to be on experiences that don’t work at home for reasons of space, budget, intrusiveness – or preferably, a combination of all of the above.
The reason I raise this issue now is that in the past few weeks, most of us will have seen videos or demonstrations of technologies which, although their creators purport to be focused on the home market, clearly fall into these categories. One is Microsoft's IllumiRoom system, which uses Kinect to map a 3D space and then projects imagery matched to that 3D map. It's a great piece of technology with extraordinary gaming potential. It's also abjectly unsuited to an ever-increasing number of living rooms around the world. Kinect alone is an impossibility for many players due to the space and room layout it demands; IllumiRoom, demanding similar space if not more and intrusively taking over the entire room such that nobody else can use it concurrently with the game being played, is simply not going to work for most people and most homes. Outside the home, though, in a dedicated venue? The potential of the technology is extraordinary, the experiences it could enable serving to create a destination for gamers seeking something that just won't work at home.
The same thought process applies, to some extent, to the Oculus Rift. It's not that the superb VR headset hardware won't work at home – of course it will, and it'll probably only be a few hardware generations before the compromises presently being made in the name of cost are ironed out by technological progress. However, the "full" VR experience – with a custom controller (a gun, perhaps, or a full-body motion sensing suite), a multi-directional treadmill, and so on – is simply going to be too expensive for most users, and even if prices collapsed, it's too big and unwieldy to live in most people's apartments. Yet the entertainment potential of such a fully-functional setup, running in parallel with a dozen other such suites so that a group of friends can explore a virtual world together, is enormous – and from a commercial perspective, not even all that space-consuming.
Of course, technology is just one factor. Technologies such as these (and I’m sure that others exist which also fall into the trap of “amazing, but it won’t work in my house”) can give a compelling reason for people to engage with out-of-home gaming – but the social factors also have to be right if an arcade renaissance is to be possible. Social factors are trickier, in many ways, than getting the hardware and the software right. Losing the seedy, unwelcoming image of the arcade in some regions will be tough; in others, where arcades have died entirely, the marketing of an entirely new social pursuit would present a major challenge. Getting people to try out something like this might be easy; getting them to see a trip to the VR centre with friends as an entertainment option on par with a trip to the cinema is likely to be much harder.
All the same, the entertainment possibilities opened up by technologies of this kind, which are now reaching a mature, usable stage in their development, ought to create an optimism around arcades and out-of-home gaming that hasn’t been seen for some time. Social or commercial aspects could still pull the rug out from any hope of recovery or renaissance – but the potential certainly exists for new kinds of gaming and interactive entertainment to take their place as key social out-of-home experiences in the coming years.
The Pentagon has cleared BlackBerry and Samsung mobile devices for use on Defense Department networks, a step toward broadening the military’s variety of technology equipment makers while still ensuring communications security.
Lieutenant Colonel Damien Pickart, a Pentagon spokesman, said the department cleared the use of BlackBerry 10 smart phones and BlackBerry PlayBook tablets using its Enterprise Service 10 system, as well as Samsung’s Android Knox.
“This is a significant step towards establishing a multi-vendor environment that supports a variety of state-of-the-art devices and operating systems,” Pickart said in a statement.
The Pentagon said last Wednesday it also expected to clear Apple mobile devices using the iOS 6 system at some point in early May.
The move to open up Defense Department networks is expected to set the stage for an intensified struggle for Pentagon customers among BlackBerry devices, Apple’s iPhones or iPads and units using Google’s Android platform such as Samsung Electronics’ phones.
The Pentagon currently has some 600,000 users of smart phones, computer tablets and other mobile devices. The department has 470,000 BlackBerry users, 41,000 Apple users and 8,700 people with Android devices. Most Apple and Android systems are in pilot or test programs.
The move to open up the networks to a broader array of mobile devices is part of a Pentagon effort to ensure the military has access to the latest communications technology without locking itself in to a particular equipment vendor.
To ensure security, mobile devices and operating systems go through a security review process approved by the Defense Information Systems Agency. Once their Security Technical Implementation Guide – or STIG – is reviewed and approved, the devices can be used on the network.
Some well-known industry analysts are suggesting that Microsoft could be as much as six months behind on software development for the Xbox Next. According to these sources, a combination of events has put Microsoft in this position, and it seems that some titles being developed internally have been canned. The situation has led Microsoft to seek exclusives from third-party sources to fill in the gaps.
We first suggested a link between EA and Microsoft on some sort of exclusive deal back when EA was absent from the Sony press conference earlier this year. Now, we find that the two have a deal of some sort for the new Respawn title, which will apparently be exclusive to the Xbox 360 and Xbox Next. That's not all: Microsoft is expected to have more exclusives to announce. The real question is whether these are true exclusives or just timed exclusives that we will see on the PS3/PS4 at some point in the future.
Even if Microsoft's internal exclusives are thin for the Xbox Next at launch, we expect the company to catch up; we don't see a big gap developing. Microsoft has solid properties to use on the Xbox Next, and it will get those titles developed and released. No worries: it is going to be like every console launch, where software is scarce when the system is released.
According to Internet analytics company Net Applications, Windows 8 gained just over half a percentage point of usage share in April — virtually the same as the month before — but again fell further behind the pace set in 2007 by Windows Vista, the edition most see as Microsoft’s last dud.
Windows 8's April share, including what Net Applications labeled as "touch" for Windows 8 and Windows RT — in other words, browsing from the "Modern" user interface (UI) rather than the mouse-and-keyboard UI of the traditional desktop — was 4.2% of all Windows PCs, up from March's 3.6%.
Even with that increase, the gap between Windows 8's and Vista's adoption trajectories again widened.
By the end of its sixth month, Vista powered 5.8% of all Windows PCs, or 1.6 percentage points higher than Windows 8 at the same point in its post-release timeline. April's difference between Vista and Windows 8 was several tenths of a point larger than the month before, and the biggest so far in Computerworld's year-long tracking.
Windows 8's performance was not the only bad news for Microsoft last month: Once again, Windows XP's usage share resisted meaningful erosion, dropping by only half a percentage point.
XP’s elimination has become a top priority for Microsoft, as the 12-year-old OS faces a support retirement deadline of April 8, 2014, when the company will serve up XP’s final security update.
In April, Windows XP accounted for 41.7% of all Windows systems worldwide, down from 42.2% the month prior, Net Applications said.
Projections of Windows XP’s remaining share in April 2014 did not change. Based on its average monthly loss over the past year, XP will power 30.5% of all Windows PCs when the retirement deadline arrives.
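The projection is straight-line arithmetic: take the current share and subtract the average monthly loss for each remaining month until the deadline. The monthly-loss figure below is inferred from the numbers in the article, not taken from Net Applications directly:

```python
current_share = 41.7        # XP's share of all Windows systems, April 2013
avg_monthly_loss = 0.93     # assumed average drop in percentage points per month
months_left = 12            # April 2013 to the April 2014 retirement deadline

projected = current_share - avg_monthly_loss * months_left
print(round(projected, 1))  # 30.5
```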
Net Applications also reported on usage shares for Windows 7 and Vista.
The former remained flat at 48.7% of all Windows PCs, again illustrating that it hasn’t been affected by the launch of Windows 8.
Ubisoft has confirmed that Watch Dogs will arrive on November 19th in North America and November 22nd in Europe. The game has been confirmed for the Xbox 360, Xbox Next, PlayStation 3, PlayStation 4, PC, and Wii U. The release of the PlayStation 4 version is expected to coincide with the launch of the PlayStation 4 console itself, so it could shift depending on when that console ships. (This apparently applies to the Xbox Next as well.)
We are also being told that the PS3 version of the game will include an additional 60 minutes of exclusive gameplay. We are not sure whether this content will also be available to those who purchase the PS4 version, but we suspect that it will.
Four special edition versions of the game will be offered. It is not yet clear whether each special edition will be available on every platform. More details are expected in the days ahead, but these look like some very nice special editions of the game, with some very nice extras thrown in.
AMD has said the memory architecture in its heterogeneous system architecture (HSA) will move management of CPU and GPU memory coherency from the developer’s hands down to the hardware.
While AMD has been churning out accelerated processing units (APUs) for the best part of two years now, the firm’s HSA is the technology that will really enable developers to make use of the GPU. The firm revealed some details of the memory architecture that will form one of the key parts of HSA and said that data coherency will be handled by the hardware rather than software developers.
AMD’s HSA chips, the first of which will be Kaveri, will allow both the CPU and GPU to access system memory directly. The firm said that this will eliminate the need to copy data to the GPU, an operation that adds significant latency and can wipe out any gains in performance from GPU parallel processing.
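The cost of that copy can be illustrated with a loose host-side analogy, not actual GPU code: duplicating a large buffer takes time proportional to its size, while handing over a view of the same memory is effectively free, which is the behaviour HUMA aims for:

```python
import time

buf = bytearray(50 * 1024 * 1024)    # a 50 MB "dataset" in host memory

t0 = time.perf_counter()
copied = bytes(buf)                  # discrete-GPU style: duplicate the data
copy_time = time.perf_counter() - t0

t0 = time.perf_counter()
shared = memoryview(buf)             # HUMA style: both sides reference one buffer
share_time = time.perf_counter() - t0

print(copy_time > share_time)        # True: the copy dominates
```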
According to AMD, the memory architecture that it calls HUMA – heterogeneous uniform memory access, a play on uniform memory access – will handle coherency between the CPU and GPU at the silicon level. AMD corporate fellow Phil Rogers said that developers should not have to worry about whether the CPU or GPU is accessing a particular memory address, and similarly he claimed that operating system vendors prefer that memory coherency be handled at the silicon level.
Rogers also talked up the ability of the GPU to take page faults and that HUMA will allow GPUs to use memory pointers, in the same way that CPUs dereference pointers to access memory. He said that the CPU will be able to pass a memory pointer to the GPU, in the same way that a programmer may pass a pointer between threads running on a CPU.
AMD has said that its first HSA-compliant chip codenamed Kaveri will tip up later this year. While AMD’s decision to give GPUs access to DDR3 memory will mean lower bandwidth than GPGPU accelerators that make use of GDDR5 memory, the ability to address hundreds of gigabytes of RAM will interest a great many developers. AMD hopes that they will pick up the Kaveri chip to see just what is possible.
McAfee has discovered a vulnerability in Adobe’s Reader program that allows people to track the usage of a PDF file.
“Recently, we detected some unusual PDF samples,” McAfee’s Haifei Li said in a blog post. “After some investigation, we successfully identified that the samples are exploiting an unpatched security issue in every version of Adobe Reader.”
The affected versions of Adobe Reader also include the latest “sandboxed” Reader XI (11.0.2).
McAfee said that the issue is not a “serious problem” because it doesn't enable code execution; however, it does permit the sender to see when and where a PDF file has been opened.
This vulnerability could only be dangerous if hackers exploited it to collect sensitive information such as IP address, internet service provider (ISP), or even the victim’s computing routine to eventually launch an advanced persistent threat (APT).
McAfee said that it is unsure who is exploiting this issue or why, but it has found the PDFs to be delivered by an “email tracking service” provider.
“Adobe Reader will access that UNC resource. However, this action is normally blocked and creates a warning dialog,” Li said. “The danger is that if the second parameter is provided with a special value, it changes the API’s behavior. In this situation, if the UNC resource exists, we see the warning dialog.
“However, if the UNC resource does not exist, the warning dialog will not appear even though the TCP traffic has already gone.”
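Since the tracking relies on an embedded UNC path that Reader tries to resolve, a defender can triage suspect PDFs by scanning their raw bytes for such paths. A minimal sketch, with a hypothetical hostname:

```python
import re

# UNC paths (e.g. \\host\share\file) embedded in a PDF are the tell-tale
# sign of this tracking trick; scan the raw bytes for them.
UNC_PATTERN = re.compile(rb"\\\\[A-Za-z0-9.\-]+\\[^\s)>/]+")

def find_unc_paths(pdf_bytes: bytes):
    return [m.decode("ascii", "replace") for m in UNC_PATTERN.findall(pdf_bytes)]

# Hypothetical fragment of a tracking PDF:
sample = b"1 0 obj << /F (\\\\tracker.example.com\\beacon.png) >> endobj"
print(find_unc_paths(sample))
```

A match is not proof of malice, but it is a cheap signal worth flagging for closer inspection.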
McAfee said that it has reported the issue to Adobe and is waiting for the company's confirmation and a future patch. Adobe wasn't immediately available for comment at the time of writing.