The new desktop suite includes Access, Excel, Lync, OneNote, Outlook, PowerPoint, Publisher and Word. It can be downloaded and installed by any consumer, whether they currently have an Office edition or Office 365 subscription or not, and by business workers whose companies subscribe to an eligible Office 365 plan that has Pro Plus as part of the deal.
The latter range from Office 365 Enterprise’s E3 and E4 plans and Office 365 Education’s E3 and E4, to Office 365 Government’s E3 and E4. Some plans, such as Office 365 Business, are not eligible for this preview but will be opened to the beta later, Microsoft said.
“Since March, we’ve shared some glimpses of what’s to come in Office 2016,” Jared Spataro, the Office marketing group’s general manager, said in a blogpost. “Today, we’d like to give a more holistic view of what customers at home and work can expect in the next release.”
The March preview Spataro referred to was available only to a subset of Office 365 subscribers, and followed the release of a broader-based preview of Office 2016 for Mac weeks earlier. Because the latter was open to anyone two months before the Windows version’s audience was expanded today, it looks likely that Office 2016 for OS X will debut in final form before the Windows edition.
Microsoft said that Office 2016 for Windows would ship in the fall, the same timetable executives had shared earlier.
In a FAQ, Microsoft listed the requirements for running the preview, which include Windows 7 and later, and reminded potential testers that they had to uninstall Office 2013 before shifting to the preview. The two editions cannot be run side by side, as can the beta of Office 2016 on the Mac with the older Office 2011.
As is Microsoft’s practice for previews, support for Office 2016 remains self-serve, primarily at a peer-to-peer discussion forum.
Microsoft has not yet revealed the pricing of Office 2016 — on either Windows or OS X — nor its retail strategy for selling the suite outside Office 365 subscriptions.
Fujifilm Corp, a subsidiary of Tokyo-based Fujifilm Holdings Corp, sued Motorola in 2012, accusing the company of infringing three of its patents on digital camera functions and a fourth patent relating to transmitting data over a wireless connection such as Bluetooth.
The damages the jury ordered on Monday were less than the $40 million Fujifilm sought going into the trial, which began on April 20.
The jury in San Francisco said Motorola, a unit of China’s Lenovo Group, proved that three of the disputed patents – two on face recognition and one on Wi-Fi and Bluetooth data transfer – were invalid. Motorola failed to prevail on a patent related to converting color images to monochrome.
“We are pleased with the verdict related to three out of the four patents and are evaluating our options on the one patent on which we did not prevail,” Motorola spokesman William Moss said in an email.
A spokeswoman for Fujifilm did not immediately respond to a request for comment.
Motorola, which Lenovo bought from Google Inc last year, had argued that the Fujifilm patents should be canceled because they were not actually new or they were obvious compared to previously patented inventions. The company also argued it already held a license to Bluetooth technology.
451 Research has revealed that proprietary cloud offerings are currently more cost effective than OpenStack.
The Cloud Price Index showed that VMware, Red Hat and Microsoft all offer a better total cost of ownership (TCO) than OpenStack distributors.
The report blames the shortfall on a lack of skilled OpenStack engineers, leading to a high price for employing them.
Commercial solutions run at around $0.10 per virtual machine hour, compared with $0.08 for OpenStack, but going commercial is cheaper when labour and other external factors are taken into account.
The report claimed that enterprises could hire an extra three percent of staff for a commercial cloud rollout and still save money.
“Finding an OpenStack engineer is a tough and expensive task that is impacting today’s cloud-buying decisions,” said Dr Owen Rogers, senior analyst at 451 Research.
“Commercial offerings, OpenStack distributions and managed services all have their strengths and weaknesses, but the important factors are features, enterprise readiness and the availability of specialists who understand how to keep a deployment operational.
“Buyers need to balance all of these aspects with a long-term strategic view, as well as TCO, to determine the best course of action for their needs.”
Enterprises need to consider whether they may end up locked into a proprietary feature which could then go up in price, or whether features may become decommissioned over time.
451 Research believes that this TCO gulf will narrow in time as OpenStack matures and the talent pool grows.
The research also suggests that OpenStack can already provide a TCO advantage over DIY solutions with a tipping point where 45 percent of manpower is saved by doing so. The company believes that the ‘golden ratio’ is 250 virtual machines per engineer.
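The arithmetic behind that labour argument can be sketched in a few lines of Python. The $0.10 and $0.08 per-VM-hour rates and the 250-VMs-per-engineer "golden ratio" come from the report; the fleet size, team sizes and salary figures below are purely illustrative assumptions, not 451 Research's numbers:

```python
# Back-of-the-envelope sketch of how labour costs can flip a per-VM-hour
# price comparison. Salaries, fleet size and headcounts are assumptions.

HOURS_PER_YEAR = 24 * 365  # a VM running continuously

def tco_per_vm_hour(vm_rate, num_vms, engineers, salary):
    """Total cost per VM-hour once engineering labour is included."""
    infra = vm_rate * num_vms * HOURS_PER_YEAR
    labour = engineers * salary
    return (infra + labour) / (num_vms * HOURS_PER_YEAR)

num_vms = 1000  # assumed private-cloud fleet

# Commercial distribution: higher nominal rate, easier-to-hire engineers.
commercial = tco_per_vm_hour(0.10, num_vms, engineers=4, salary=100_000)

# OpenStack: lower nominal rate, but scarcer (and pricier) specialists.
# Staffing at the report's ratio of 250 VMs per engineer.
openstack = tco_per_vm_hour(0.08, num_vms, engineers=num_vms // 250,
                            salary=150_000)

print(f"commercial: ${commercial:.3f}/VM-hour")
print(f"openstack:  ${openstack:.3f}/VM-hour")
```

With these assumed salaries the commercial option comes out slightly cheaper per VM-hour despite its higher sticker price, which is the report's core point: the comparison hinges on what you pay the people running the cloud, not just the per-hour rate.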
OpenStack’s next major release, Kilo, has just shipped, and Ubuntu and HP are the first distributions to incorporate it.
Red Hat and Ubuntu, along with HP through its Helion range, are major contributors to the OpenStack codebase in addition to selling their own commercial products.
Intel has come up with technologies which it believes will give broadband a kick up the back-end.
According to The Register, the cunning plan is to put more of its chips into the modems and routers that homes and smallish businesses use to connect to the web.
Currently the gear is run by cheap and stupid technology. Embedded Linux is about the best you can expect and that cannot be customised even if you could get to it.
Intel thinks that building x86s into CPE devices will make them more interesting. It already builds Atom cores into its PUMA range of DOCSIS 3.0 cable modems, but apparently stage two involves putting them into DOCSIS 3.1 kit. This will mean that it can deliver gigabit cable Internet performance. Recently Chipzilla bought Lantiq, which makes DSL modem system-on-chips. Lantiq has some G.fast technology, which is tipped to be the gigabit-speed successor to VDSL.
If Intel installs x86 cores into PUMA kit and Lantiq gear and tarts it up with a bit of virtualisation, the home router becomes a server and the ISP can push services directly into the home. Firewalls could be run by the ISP, along with some of the security defenses.
If Intel gets OpenStack running at carrier scale then chips on modems become an important part of its Internet of Stuff policy.
Microsoft has confirmed that it has acquired Israel-based N-Trig for $30 million.
Microsoft already had a 6% minority stake in N-Trig, but apparently had been in negotiations to acquire the company for months. In February, Israel’s Haaretz newspaper said N-Trig was valued at less than $10 million, while Calcalist — which originally broke the story of Microsoft’s looking to acquire the company — reported that Microsoft would pay at least $200 million.
Lenovo, Hewlett-Packard, Acer and others are among the who’s who of PC makers that turned to N-Trig for touchscreen and digitizer technology in Windows tablets and PCs. N-Trig developed a controller chip and drivers for the Windows OS that drive its touch and stylus technology.
The stylus is important to Microsoft as it looks to incorporate handwriting recognition as a standard feature in applications like Outlook and Office. Microsoft believes that natural interaction is an important way to make PCs and tablets more useful.
Microsoft next week will start shipping its Surface 3 tablet, which also has the N-Trig stylus and controller.
Several new integrations hit Google’s voice search system for Android devices last week. The system allows users to conduct queries orally by first saying, “OK Google.” In addition to Zillow, Shazam, NPR and online radio service TuneIn have been integrated.
The integrations require users to have those other apps on their smartphone or tablet. But instead of having to open the apps individually, users can ask their mobile device directly, which, hopefully, will then take them inside the appropriate app with an answer.
The Zillow integration will work for local searches as well as housing searches in other cities. Users, for instance, can search for homes for sale, for rent, or for open houses. “Show apartments for rent in Boston on Zillow,” you could say.
With TuneIn, users can say, “Open TuneIn in car mode.” That would be helpful for hands-free driving.
Through the Shazam integration, people can ask their smartphone what song is playing around them, by saying, “OK Google, Shazam this song.” (Google voice search already lets people identify songs without Shazam.)
Since Google opened its OK Google voice search to outside developers in 2014, many third-party apps have been integrated into the system. There are also plenty of useful commands that rely on Google’s own smarts.
Its functionalities extend beyond search. The “OK Google” command is also now a key element in how people interact with smartwatches running Google’s Android Wear operating system.
Samsung has been updating its operating system which sounds like a sneeze – Tizen.
Samsung Z1 users, mostly in India and Bangladesh, have noticed a new update this week that marks the beginning of a new chapter for Tizen.
The over-the-air (OTA) update is just 16.1MB and normally would not be a big deal, but it brings Samsung’s Tizen-powered smartphone up to firmware version Z130HDDU0BOD8.
OK the update does not do much, but it does prove that keeping Tizen running fast and smooth is at the forefront of Samsung’s plans.
Samsung pushed out the update having just rolled out the Tizen store globally, with 182 new countries added to the list of Tizen store-accessible locations (Netherlands, UK, US, France, Russia, Australia, Malaysia, Serbia, Croatia, Thailand, Philippines, South Africa, UAE, and others).
Global Z1 users can access the Tizen store and free apps, but they cannot access paid apps at the moment.
The expectation is that Samsung is prepping its Tizen store for a global rollout, and will start to roll out Tizen-powered devices worldwide in the months to come.
This brave new world might arrive with a Z2 or perhaps some more Tizen based smartphones.
Valve is no stranger to its ventures having a somewhat rocky start. Remember when the now-beloved Steam first appeared, all those years ago? Everyone absolutely loathed it; it only ever really got off the ground because you needed to install it if you wanted to play Half-Life 2. It’s hard now to imagine what the PC games market would look like if Valve hadn’t persisted with their idea; there was never any guarantee that a dominant digital distribution platform would appear, and it’s entirely plausible that a messy collection of publisher-owned storefronts would instead loom over the landscape, with the indie and small developer games that have so benefited from Steam’s independence being squeezed like grass between paving stones.
That isn’t to say that Valve always get things right; most of the criticisms leveled at Steam in those early days weren’t just Luddite complaints, but were indeed things that needed to be fixed before the system could go on to be a world-beater. Similarly, there have been huge problems that needed ironing out with Valve’s other large feature launches over the years, with Steam Greenlight being a good example of a fantastic idea that has needed (and still needs) a lot of tweaking before the balance between creators and consumers is effectively achieved.
You know where this is leading. Steam Workshop, the longstanding program allowing people to create mods (or other user-generated content) for games on Steam, opened up the possibility of charging for Skyrim mods earlier this month. It’s been a bit of a disaster, to the extent that Valve and Skyrim publisher Bethesda ended up shutting down the service after, as Gabe Newell succinctly phrased it, “pissing off the Internet”.
There were two major camps among those who complained about the paid mods system for Skyrim: those who objected to the botched implementation (there were cases of people who didn’t own the rights to mod content putting it up for sale, of daft pricing, and a questionable revenue model that awarded only 25% to the creators), and those who objected in principle to the very concept of charging for mods. The latter argument, the more purist of the two, sees mods as a labour of love that should be shared freely with “the community”, and objects to the intrusion of commerce, of revenue shares and of “greedy” publishers and storefronts into this traditionally fan-dominated area. Those who support that point of view have, understandably, been celebrating the forced retreat of Valve and Bethesda.
Their celebrations will be short-lived. Valve’s retreat is a tactical move, not a strategic one; the intention absolutely remains to extend the commercial model across Steam Workshop generally. Valve acknowledges that the Skyrim modding community, which is pretty well established (you’ve been able to release Steam Workshop content for Skyrim since 2012), was the wrong place to roll out new commercial features – you can’t take a content creating community that’s been doing things for free for three years, suddenly introduce experimental and very rough payment systems, and not expect a hell of a backlash. The retreat from the Skyrim experiment was inevitable, with hindsight. With foresight, the adoption of paid mods more broadly is equally inevitable.
Why? Why must an area which has thrived for so long without being a commercial field suddenly start being about money? There are a few reasons for the inevitability of this change – and, indeed, for its desirability – but it’s worth saying from the outset that it’s pretty unlikely that the introduction of commercial models is going to impact upon the vast majority of mod content. The vast majority of mods will continue to be made and distributed for free, for the same reasons as previously; because the creator loves the game in question and wants to play around with its systems; because a budding developer wants a sandbox in which to learn and show off their skills to potential employers; because making things is fun. Most mods will remain small-scale and will, simply, not be of commercial value; a few creators will chance their arm by sticking a price tag on such things, but the market will quickly dispose of such behaviour.
Some mods, though, are much more involved and in-depth; to realise their potential, they impact materially and financially upon the working and personal lives of their creators. For that small slice out of the top of the mod world, the introduction of commercial options will give creators the possibility of justifying their work and focus financially. It won’t make a difference at all to very many, but to the few talented creative people who will be impacted, the change to their lives could be immense.
This is, after all, not a new rule that’s being introduced, but an old, restrictive one that’s being lifted. Up until now, it’s effectively been impossible to make money from the majority of mods. They rely upon someone else’s commercial, copyrighted content; while not outright impossible technically, the task of building a mod that’s sufficiently unencumbered with stuff you don’t own for it to be sold legally is daunting at best. As such, the rule up until now has been – you have to give away your mod for free. The rule that we’ll gradually see introduced over the coming years will be – you can still give away your mod for free, but if it’s good enough to be paid for, you can put a price tag on it and split the revenue with the creator of the game.
That’s not a bad deal. The percentages certainly need tweaking; I’ve seen some not unreasonable defences of the 25% share which Bethesda offered to mod creators, but with 30% being the standard share taken by stores and other “involved but not active” parties in digital distribution deals, I expect that something like 30% for Steam, 30% for the publisher and 40% for the mod creator will end up being the standard. Price points will need to be thrashed out, and the market will undoubtedly be brutal to those who overstep the mark. There’s a deeply thorny discussion about the role of F2P to be had somewhere down the line. Overall, though, it’s a reasonable and helpful freedom to introduce to the market.
It’s also one which PC game developers are thirsting for. Supporting mod communities is something they’ve always done, on the understanding that a healthy mod scene supports sales of the game itself and that this should be reward enough. By and large, this will remain the rationale; but the market is changing, and the rising development costs of the sort of big, AAA games that attract modding communities are no longer being matched by the swelling of the audience. Margins are being squeezed and new revenue streams are essential if AAA games are going to continue to be sustainable. It won’t solve the problems by itself, or overnight; but for some games, creating a healthy after-market in user-generated content, with the developer taking a slice off the top of the economy that develops, could be enough to secure the developer’s future.
Hence the inevitability. Developers need the possibility of an extra revenue stream (preferably without having to compromise the design of their games). A small group of “elite” mod creators need the possibility of supporting themselves through their work, especially as the one-time goal of a studio job at a developer has lost its lustre as the Holy Grail of a modder’s work. The vast majority of gamers will be pretty happy to pay a little money to support the work of someone creating content they love, just as it’s transpired that most music, film and book fans are perfectly happy to pay a reasonable amount of money for content they love when they’re given flexible opportunities to do so.
Paid mods are coming, then; not to Skyrim and probably not to any other game that’s already got an established and thriving mod community, but certainly to future games with ambitions of being the next modding platform. Valve and its partners will have to learn fast to avoid “pissing off the Internet” again; but for those whose vehement arguments are based on the non-commercial “purity” of this corner of the gaming world, enjoy it while it lasts; the reprieve won this week is a temporary one.
A California civil liberties group unveiled a mobile application that will allow bystanders to record cell phone videos of possible cases of police misconduct and then quickly save the footage to the organization’s computer servers.
The California chapter of the American Civil Liberties Union said the app will send the video to the organization and preserve it even if a phone is seized by police or destroyed.
The launch of the ACLU’s “Mobile Justice CA” app comes as law enforcement agencies face scrutiny over the use of lethal force, especially against African-Americans, following several high-profile deaths of unarmed black men in encounters with police over the last year in the United States.
“It’s critical that people understand what is being done by police officers, because what is being done is being done in the name of the public,” said Hector Villagra, executive director of the ACLU of Southern California.
The app is targeted at residents of the most populous U.S. state, but ACLU chapters have launched similar mobile apps in at least five other states, including New York, Missouri and Mississippi over the last three years.
It also sends an alert to anyone with the app who might be in the area, giving them an opportunity to go to the location and observe, the ACLU said.
Villagra said the ACLU, in looking at which cases to delve into more deeply, will prioritize those that come with a written report, which is another element users can submit through the app. Records of incidents from users living in other states will be sent to ACLU officials there, he said.
ACLU officials advised anyone interacting directly with officers who wants to use the app to announce they are reaching for a phone, because officers might mistake the device for a weapon.
A representative from the California Peace Officers Association declined to comment immediately on the app.
The browser developer decided after a discussion on its community mailing list that it will set a date after which all new features will be available only to secure websites, wrote Firefox security lead Richard Barnes in a blog post. Mozilla also plans to gradually phase out access to browser features for non-secure websites, particularly features that could present risks to users’ security and privacy, he added.
The community still has to agree on which new features will be blocked for non-secure sites. Firefox users would, for instance, still be able to view non-secure websites, but those websites would not get access to new features such as new hardware capabilities, Barnes said.
“Removing features from the non-secure Web will likely cause some sites to break. So we will have to monitor the degree of breakage and balance it with the security benefit,” he said, adding that Mozilla is already considering less severe restrictions for non-secure websites to find the right balance. At the moment, Firefox already blocks, for example, persistent permissions from non-secure sites for access to cameras and microphones.
Mozilla’s move follows the introduction of “opportunistic encryption” to Firefox last month, which provides encryption for legacy content that would otherwise have been unencrypted.
“Permanent or temporary changes to your skin, such as some tattoos, can also impact heart rate sensor performance. The ink, pattern, and saturation of some tattoos can block light from the sensor, making it difficult to get reliable readings,” Apple said.
Some watch functions require direct contact with the skin to work. If the device can’t detect a pulse, it assumes it isn’t being worn, shutting down apps and requiring people to enter their passcode. Turning off the wrist-detection function solves the issue, but prevents people from using Apple Pay.
Reports emerged this week that people with dark wrist tattoos were experiencing problems with their Apple Watches. One watch owner with a tattoo on his left wrist said the device would lock, preventing him from receiving notifications. He initially thought the device’s sensors were defective.
But when he placed the watch on his hand, which isn’t tattooed, he was able to get text notifications.
Green LED lights and photodiode sensors on the back of the watch measure the amount of blood flowing through a person’s wrist, using a technology called photoplethysmography, according to the support page. Blood absorbs green light, and by flashing the LED lights, the watch can measure blood flow and then calculate a person’s heart rate. When the watch is unable to get a read, it increases LED brightness and sampling rates, Apple said.
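The core idea behind photoplethysmography can be sketched in a few lines of Python. This is purely illustrative, not Apple's implementation: the sample rate, the synthetic sinusoidal "signal" and the naive peak counter below are all assumptions standing in for real photodiode readings and the watch's proprietary processing.

```python
# Illustrative sketch: estimating heart rate from a PPG-style signal by
# counting peaks in reflected light intensity. All parameters assumed.
import math

SAMPLE_RATE = 50  # samples per second (assumed)
TRUE_BPM = 72     # heart rate encoded into the synthetic signal

# Each heartbeat changes blood volume, which modulates how much green
# light is absorbed; model that as a sinusoid on top of a DC offset.
duration = 10  # seconds of data
signal = [
    1.0 + 0.1 * math.sin(2 * math.pi * (TRUE_BPM / 60) * (i / SAMPLE_RATE))
    for i in range(duration * SAMPLE_RATE)
]

def estimate_bpm(samples, sample_rate):
    """Count local maxima above the mean and convert to beats per minute."""
    mean = sum(samples) / len(samples)
    peaks = [
        i for i in range(1, len(samples) - 1)
        if samples[i] > mean
        and samples[i] > samples[i - 1]
        and samples[i] >= samples[i + 1]
    ]
    seconds = len(samples) / sample_rate
    return len(peaks) * 60 / seconds

print(round(estimate_bpm(signal, SAMPLE_RATE)))  # 72 for this clean signal
```

A tattoo that absorbs the green light effectively flattens the waveform, so the peaks vanish and no rate can be counted, which is why the watch responds by raising LED brightness and sampling rates.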
Apple reminded users that Bluetooth-equipped external heart rate monitors, like chest straps, can be connected to the watch to obtain vital readings.
Apple didn’t immediately reply to a request for further comment.
IBM has taken another big step in its quest to create a true quantum computer.
The company has created a 2×2 quantum bit (qubit) superconductor chip which, when chilled to a fraction above absolute zero, becomes capable of carving through calculations like a knife through digital butter.
The chip would need to connect hundreds of qubits, rather than just four, to be a viable infrastructure for supercomputing, but it represents an important step forward in the quest for a machine that can go where traditional silicon machines simply cannot.
Arvind Krishna, senior vice president and director of IBM Research, said: “Quantum computing could be potentially transformative, enabling us to solve problems that are impossible or impractical to solve today,
“While quantum computers have traditionally been explored for cryptography, one area we find very compelling is the potential for practical quantum systems to solve problems in physics and quantum chemistry that are unsolvable today.
“This could have enormous potential in materials or drug design, opening up a new realm of applications.”
IBM, Google and Nasa are all working on quantum computing projects, and further research is being conducted by the US government.
But a research paper from the Swiss Federal Institute of Technology last year suggested that the Google-Nasa D-Wave 2 showed no significant speed improvement over a traditional supercomputer.
Quantum computing allows bits to exist simultaneously in a state of ‘zeroness’ and ‘oneness’ (think Schrödinger’s cat), letting such machines tackle complex problems at phenomenal speed. Many believe that it is the key to true artificial intelligence.
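That "zeroness and oneness" idea can be made concrete with a toy sketch, in ordinary Python rather than on real quantum hardware: a qubit's state is a pair of amplitudes, and measuring it collapses the state to a classical bit with probabilities given by the squared amplitudes. The amplitudes and sample count are illustrative assumptions, and none of this reflects IBM's actual chip.

```python
# Toy model: a qubit state alpha|0> + beta|1> measured many times.
# Purely illustrative; not a simulation of IBM's superconducting hardware.
import random

def measure(alpha, beta):
    """Collapse a qubit to 0 with probability |alpha|^2, else to 1."""
    p0 = abs(alpha) ** 2
    assert abs(p0 + abs(beta) ** 2 - 1) < 1e-9, "amplitudes must be normalised"
    return 0 if random.random() < p0 else 1

random.seed(0)  # reproducible runs for this demo

# An equal superposition: 'zero' and 'one' at once until measured.
amp = 2 ** -0.5
samples = [measure(amp, amp) for _ in range(10_000)]
print(sum(samples) / len(samples))  # approximately 0.5
```

Until the measurement, the qubit genuinely carries both possibilities, which is what lets a grid of them explore many candidate answers at once.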
The big step comes with the ability to arrange qubits in a grid. Previous attempts involved them all being in a line, which starts to get impractical when done at scale.
But the big problem is the sheer instability of qubits. Between interference and the fact that they operate only at a temperature that can’t support life, it’s safe to say that you won’t be seeing them in the Samsung Galaxy S7.
IBM announced last summer that it planned to pump $3bn into research to find the successor to the silicon chip.
The UK government pledged $20m to train quantum researchers earlier this year, claiming that the results could add $1.5bn to the UK economy.
Microsoft warned that the software is pretty rough around the edges, but Steve Teixeira, director of programme management for Internet of Things (IoT) at the firm, said that Microsoft wanted to give makers “the opportunity to play with the software bits early” to get feedback on what’s good, and what’s not.
Teixeira said: “We’re embracing the simple principle of helping makers and device builders do more by bringing our world-class development tools, the power of the Universal Windows Platform, direct access to hardware capabilities, and the ability to remotely debug, update and manage the software running on Raspberry Pi 2 devices.”
“You may notice some missing drivers or rough edges. We look forward to receiving your feedback to help us prioritise our development work,” he added, noting that a final version of the software will be made available this summer.
We’re going to put our money on a late July release.
Raspberry Pi also offers developers some pre-download tips in a blog post. As well as echoing Microsoft’s warning that it’s likely to be buggy, Liz Upton, head of communications at Raspberry Pi, said that you’ll need to be signed up to the Windows Insider program and have Windows 10 installed on your PC.
Running Windows 10 on a virtual machine won’t offer compatibility for the IoT release as you need access to the SD card reader.
Microsoft also showed off a Raspberry Pi-powered robot during the Build keynote on Tuesday to demonstrate how its HoloLens headset can bring such devices to life.
The demo (below) showed HoloLens overlaying a holographic robot named B15 on top of a physical one made using Raspberry Pi 2, and displaying how far the robot traveled, its battery life, wireless connection, temperature and other variables.
Microsoft also announced at Build that it has signed a partnership with Arduino making Windows 10 the first Arduino-certified operating system.
The new initiatives include alliances with the Smart Cities Council and the Thrive Accelerator mentorship program to promote smart farming. Verizon is also a partner in an AgTech Summit coming in July with Forbes.
Dan Feldman, Verizon’s director of IoT Smart Cities, said city leaders in the U.S. are interested in investing in smart streetlights, car sharing and smart parking to find greater efficiencies. Verizon last year created an Auto Share service to connect drivers to vehicles via Verizon’s 4G LTE network.
Verizon has been active in a number of connected services in cities for years. In Charlotte, N.C., Verizon joined with Duke Energy to connect buildings in the commercial district with kiosks that help the community track energy consumption. People can also connect via social media alerts. Over two years, Charlotte has been able to reduce power consumption by 8.4%, at a savings of $10 million, Verizon said.
Smart cities and farms are more than buzz words. Cities are increasingly willing to invest in new IoT technology and wireless carriers and network providers have been actively involved. In Kansas City, Mo., last week, the City Council voted to authorize a contract with Cisco and its partners that envisions video sensors, free public Wi-Fi, 25 interactive kiosks, and smart lighting along a 2.2 mile-streetcar line that’s under construction in the downtown area.
Japanese consumer electronics maker Sony Corp expects operating profit to more than quadruple this year, as strong sales of camera sensors and cost reductions anchor a much needed turnaround after years of losses on TVs and mobile phones.
Sony said on Thursday it estimates operating profit will jump in the year ending March 2016 to 320 billion yen ($2.7 billion). For the previous fiscal year, operating profit was 68.5 billion yen, in line with an April 22 forecast.
This year’s earnings would be Sony’s biggest annual operating profit in seven years, though well below an average analyst forecast of 408 billion yen, according to Thomson Reuters. Achieving it would mark another milestone in Chief Executive Kazuo Hirai’s long haul to pull one of Japan’s most iconic technology firms out of heavy losses, squeezed by cheaper and more nimble rivals in mass consumer electronics.
Under Hirai’s direction, Sony has reshaped itself to target expansion in lucrative new areas such as sensors used in cameras for popular devices like Apple Inc’s iPhones. That strategy has vexed some former executives who have urged Hirai to focus on innovation, not cost cuts.
“We are emerging from losses but still recuperating,” Chief Financial Officer Kenichiro Yoshida told reporters on Thursday, saying Sony was being cautious in forecasting to break with past habits.
“In the past seven years, we revised (earnings guidance) downwards around 15 times,” he said, citing fluctuations in foreign exchange rates as a major concern.
As part of its restructuring, Sony has exited PCs and spun off its TV business. It also plans to split off its audio and video business in an effort to hold subsidiaries more accountable for making a profit.
Investors have welcomed the new-look Sony. Shares have risen more than 30 percent in 2015, and year-on-year, the stock has nearly doubled, hitting 3,827.50 yen earlier this month, its highest since 2008.