A Stanford engineering team has built a radio, equipped with sensors, computational units and antennas one-tenth the size of Wi-Fi antennas, that is able to gain all the power it needs from the same electromagnetic waves that carry signals to its receiving antenna. No batteries are required.
These radios, which are designed to compute, execute and relay commands, could be the key to linking gadgets together in the increasingly popular idea of the Internet of Things.
Today’s radios are generally about the size of a quarter, according to Amin Arbabian, assistant professor of electrical engineering at Stanford and a researcher on the radio project. These new radios are much smaller, at 3.7 x 1.2 millimeters.
Radios that small could be added to everything from $100 bills to medical gauze, Band-Aids and home appliances. At just pennies per radio, that means a myriad of products could easily and cheaply become part of a linked network.
“This could be very important,” Arbabian told Computerworld. “When you think about the Internet of Things, you’re talking about needing a thousand radios per person. That counts all the radios and devices you’d need around you in your home and office environments. With 300 million people in the U.S., we’d have 300 billion radios.”
A Bluetooth-type radio works fine for smartphones but is too big and expensive to connect most of the objects in users’ lives.
“We needed the cost and size to go down, and you need scale,” said Arbabian, who began working on the project in 2011. “Do you want to put something the size of a Bluetooth radio on a Band-Aid? It’s too big. It costs a lot. The technology we have today for radios doesn’t meet any of these requirements.”
He explained that a tiny radio with a temperature sensor could be put on a bandage or piece of adhesive applied to every patient who enters a hospital. The radio and its sensor would enable the medical staff to continuously track every patient’s temperature, a key health indicator, effortlessly and cheaply.
Sensors also could be used to measure air quality, to track medications from the manufacturer to the end user and to even keep track of tools and supplies in an operating room. For instance, Arbabian noted that a radio, encased in bio-safe material, could be attached to gauze or medical tools. With them, everything in an operating room could be tracked to ensure that nothing is left inside the patient at the end of surgery.
The radios also could be attached to everyday products inside the home, including appliances, doors and windows.
The Supreme Court’s June ruling on the patentability of software raised as many questions as it answered. One specific software patent went down in flames in the case of Alice v. CLS Bank, but the abstract reasoning of the decision didn’t provide much clarity on which other patents might be in danger.
Now the lower courts appear to be bringing the ruling’s practical consequences into focus and it looks like software patents are getting a kicking. There have been 11 court rulings on the patentability of software since the Supreme Court’s decision and each of them has led to the patent being invalidated.
In the late 1990s and early 2000s, the Patent Office handed out a growing number of what might be called “do it on a computer” patents. These patents take some activity that people have been doing for centuries — say, holding funds in escrow until a transaction is complete — and claim the concept of performing that task with a computer or over the internet. The patents are typically vague about how to perform the task in question.
The Supreme Court invalidated a patent whose owners claimed to have invented the concept of using a computer to hold funds in escrow to reduce the risk that one party would fail to deliver on an agreement. The Supreme Court ruled that the use of a computer did not turn this centuries-old concept into a new invention.
This has led to many other patents being declared invalid. On July 6, a Delaware trial court rejected a Comcast patent that claimed the concept of a computerized telecommunications system checking with a user before deciding whether to establish a new connection. The court said that the patented process could easily be performed by human beings making telephone calls.
Basically, this means that you can’t take a normal human activity, do it with a computer, and call it a patentable invention.
This reasoning would likely kill off the famous one-click patent, if that were ever challenged.
The MEMS-IGZO display, being developed under a 2012 tie-up with Qualcomm subsidiary Pixtronix, could be used in smartphones and tablets as well as larger displays.
Compared to current LCDs, MEMS-IGZO technology can operate without blurring the image in temperatures as low as -30 C (-22 F), offers better color purity and gamut, and has ultra-low power consumption.
Depending on usage, devices could run for twice as long using the new displays instead of LCD, said Pixtronix President Greg Heinzinger.
The “programmable display” can change power usage depending on whether the user is looking at a video or an e-book, for instance, Heinzinger said, adding that most display technologies use the same power regardless of the content. Color gamut, depth and fidelity can also be modified depending on use.
Power efficiency will become a crucial feature of next-generation displays because resolution has basically reached the limits of perception of the human eye, Sharp Devices Group Chief Officer Norikazu Hohshi told the briefing.
The company is licensing MEMS (microelectromechanical systems) technology from Pixtronix. Qualcomm has long been trying to make the technology popular, and commercialized its related Mirasol low-power display in its Toq smartwatch last year.
MEMS displays work in a fundamentally different way than LCDs. Thousands of miniature shutters, one per pixel, modulate light emitted from RGB LEDs to produce different colors. The shutters take only 100 microseconds to move, giving the system a faster reaction time than LCD pixels, each of which is paired with a color filter that allows only red, blue or green light to pass.
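To put the 100-microsecond figure in context, a rough timing budget helps. The calculation below assumes a conventional 60Hz refresh rate and one field each for red, green and blue; neither number comes from the article, so treat this as an illustrative sketch only:

```python
# Rough timing budget for a field-sequential MEMS display.
# Assumptions (not stated in the article): 60Hz refresh, three color fields per frame.
frame_us = 1_000_000 / 60        # one frame lasts roughly 16,667 microseconds
shutter_us = 100                 # quoted shutter transition time
fields_per_frame = 3             # one field each for red, green, blue
transition_budget_us = fields_per_frame * shutter_us

# The three shutter transitions consume well under 2% of the frame,
# leaving almost the entire frame for actually emitting light.
fraction = transition_budget_us / frame_us
```

Under these assumptions, shutter movement is a negligible slice of each frame, which is why the technology can react far faster than LCD pixels.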
IGZO (indium gallium zinc oxide) refers to Sharp’s semiconductor technology used with the MEMS shutters. The MEMS-IGZO displays can be built using existing LCD manufacturing infrastructure, which would be a cost benefit.
That’s the logic behind Ericsson’s planned $95 million acquisition of Fabrix Systems, which sells a cloud-based platform for delivering DVR (digital video recorder), video on demand and other services.
The acquisition is intended to help service providers deliver what Ericsson calls TV Anywhere, for viewing on multiple devices with high-quality and relevant content for each user. Cable operators, telecommunications carriers and other service providers are seeing rapid growth in video streaming and want to reach consumers on multiple screens. That content increasingly is hosted in cloud data centers and delivered via Internet Protocol networks.
Fabrix, which has 103 employees in the U.S. and Israel, sells an integrated platform for media storage, processing and delivery. Ericsson said the acquisition will make new services possible on Ericsson MediaFirst and Mediaroom as well as other TV platforms.
Stockholm-based Ericsson expects the deal to close in the fourth quarter. Fabrix Systems will become part of Ericsson’s Business Unit Support Solutions.
Other players usually associated with data networks are also moving into the once-specialized realm of TV. At last year’s CES, Cisco Systems introduced Videoscape Unity, a system for providing unified video services across multiple screens, and at this year’s show it unveiled Videoscape Cloud, an OpenStack-based video delivery platform that can be run on service providers’ cloud infrastructure instead of on specialized hardware.
Hewlett-Packard Co is taking a look at putting its web-based photo sharing service Snapfish on the block, and has held discussions with multiple private equity and industry buyers, a person with knowledge of the situation said.
Snapfish, which HP bought for more than $300 million in 2005 and currently sits within its printing and personal systems group, is considered non-core for the company, the person said, asking not to be named because the matter is not public.
A spokesman for HP declined to comment.
Last year, HP replaced the printing and personal business’ long-time head Todd Bradley with former Lenovo executive Dion Weisler. Bradley has since left the company to join Tibco Software Inc as its president.
Some of the parties that have been eyeing Snapfish have also expressed interest in buying another online photo-sharing services provider, Shutterfly Inc, the person said.
Shutterfly hired Frank Quattrone’s Qatalyst Partners over the summer to find a buyer, and is expected to wrap up its process in the next several weeks, people familiar with the matter have said previously.
Finally, Ubisoft has a release date for the Wii U version of Watch Dogs. While we don’t know how many people are still waiting for the Wii U version, when it does release it could very well end up being one of the last M-rated titles for the console.
The release date for the Wii U version of Watch Dogs appears to be November 18th in North America and November 21st in Europe. This ends the delay Ubisoft announced for the Wii U version when it moved resources to prepare the other versions of the game for release.
Ubisoft has been one of the strongest software supporters of the Wii U, but it recently announced that it was done producing titles like Assassin’s Creed and Watch Dogs for the Wii U because sales of these M-rated titles just aren’t there on the platform. It did indicate that it would focus on some of its other Wii U titles that continue to be popular on the console.
The good news is that Wii U owners are getting Watch Dogs, but it looks like we will not see many more games like it on the platform.
SanDisk has released more details about its joint venture with Dell, which will see DAS Cache SSD caching software from SanDisk added to Dell’s PowerEdge servers.
Sandisk’s director of Software Marketing Rich Petersen told The INQUIRER, “We’re excited to be announcing a co-venture with a brand with Dell’s credentials that offers platform independent, brand independent caching.”
Sandisk DAS Cache is a pure software caching system that uses flash memory to improve latency at the server level by up to 37 times.
Network managers can choose to dedicate either part of an SSD or a full SSD to caching, with the software’s algorithms controlling the flow of data without the need for any additional hardware.
“An all-software solution allows anyone to take advantage of caching technology without the need for engineering knowledge or previous experience of configuration,” continued Petersen.
Users can create up to four different cache pools with different prioritisations to create quality of service (QoS) infrastructure. Cache persistence ensures that even if the server is rebooted the speed boost is maintained.
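SanDisk’s actual caching algorithms are proprietary and not described in the article, but the general idea of a read cache on fast media is easy to illustrate. The sketch below is a minimal least-recently-used (LRU) cache in Python, a conceptual stand-in only; the class name, capacity model and eviction policy are illustrative assumptions, not DAS Cache’s design:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU read cache: a conceptual sketch of keeping hot
    blocks on fast media (e.g. an SSD slice), not SanDisk's design."""

    def __init__(self, capacity):
        self.capacity = capacity      # e.g. number of blocks fitting on the SSD slice
        self._store = OrderedDict()   # key -> cached block, ordered by recency

    def get(self, key):
        if key not in self._store:
            return None               # cache miss: caller would read from slow disk
        self._store.move_to_end(key)  # mark block as recently used
        return self._store[key]

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict the least recently used block
```

A pool like this could be instantiated per priority class, which is roughly what the article’s “up to four cache pools with different prioritisations” suggests at the product level.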
SanDisk DAS Cache will be available in the range of Dell servers announced at IDF in San Francisco. However, users are not required to use SanDisk SSDs in the system, as the software works with all disk manufacturers’ products.
At launch the software supports Windows and Linux systems, with VMware support set to follow in 2015. It also supports hypervisors including Microsoft Hyper-V.
SanDisk has made a number of advances in the enterprise market this year, including the first 4TB solid-state drive (SSD) and dedicated SSDs for business laptops.
Approximately 14 million ultra-high definition (UHD) 4K2K television sets have been shipped worldwide in 2014, penetrating 6-7% of the overall TV market, according to WitsView, a subsidiary of Taiwan-based market intelligence firm TrendForce.
Chinese vendors, including Skyworth, Changhong and Hisense, have the highest shipment rates. The six largest Chinese brands, which also include Konka, TCL and Haier, will achieve a 13-15% penetration rate in the UHD TV market this year, the firm projects.
4K2K TV denotes a resolution of 3,840 x 2,160 pixels, compared with HD TV’s 1,920 x 1,080. UHD TV thus has four times the resolution of HDTV.
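The four-times figure is simple arithmetic on the pixel counts quoted above:

```python
# Verify the resolution comparison: UHD (4K2K) vs. full HD.
uhd_pixels = 3840 * 2160   # 8,294,400 pixels
hd_pixels = 1920 * 1080    # 2,073,600 pixels
ratio = uhd_pixels / hd_pixels
print(ratio)  # 4.0 -- UHD has exactly four times the pixels of full HD
```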
“China’s six major 4K2K TV brands price their products very competitively,” Anita Wang, a research manager at WitsView, said in a statement. “Other vendors can’t offer such an attractive price proposition.”
Last month, the retail price difference in China between 65-in 4K2K 3D and HD 3D TVs was 32%, but in other markets it was as high as 63%, Wang said. As a result, Chinese consumers are more willing to purchase 4K2K televisions, Wang added.
One of the biggest issues facing the UHD TV market is a lack of “available” content. That’s not to say there aren’t plenty of 4K movies and TV shows ready to be streamed to the public. Since 2004, the movie and television industry has been producing 4K content for the digital market.
“Broadcasters will always use the best equipment they can, because they want to be able to archive and repurpose that content in the future. But that’s a long ways from saying they have 4K content in the production chain,” said Paul Gray, director of TV Electronics Research at DisplaySearch.
Buying a 4K UHD TV today requires a leap of faith in two ways: You need to believe broadcasters will begin streaming 4K content soon and feel confident that the content will conform to a standard a new UHD TV can decode and process.
“Neither of those things are clear because there are no standards for 4K video,” Gray said.
LCD computer monitors are also starting to become available in UHD and feature attractive price tags, she said. For example, the 28-in 4K2K monitor retailed at an average of just $630 in August. In the coming months, panel makers will continue to introduce new 4K2K monitors in different sizes.
For example, Samsung is expected to launch a 23.6-in model that will be priced lower than the existing 23.8-in model. That will help to further drive down retail prices and stimulate 4K2K monitor demand.
Meanwhile, Apple is expected to release the 27-in 5K3K high-resolution iMac by the end of the fourth quarter of 2014.
You can’t accuse eSports League CEO Ralf Reichert of always telling people what they want to hear. At last month’s FanExpo Canada in Toronto, Ontario, just a few blocks away from the Hockey Hall of Fame, Reichert told GamesIndustry.biz that he saw competitive gaming overtaking the local pastime.
“Our honest belief is it’s going to be a top 5 sport in the world,” Reichert said. “If you compare it to the NHL, to ice hockey, that’s not a first row sport, but a very good second-row sport. [eSports] should be ahead of that… It’s already huge, it’s already comparable to these traditional sports. Not the Super Bowl, but the NHL [Stanley Cup Finals].”
Each game of this year’s Stanley Cup Finals averaged 5 million viewers on NBC and the NBC Sports Network. The finals of the ESL Intel Extreme Masters’ eighth season, held in March in Katowice, Poland, drew 1 million peak concurrent viewers, and 10 million unique viewers over the course of the weekend. That’s comparing the US audience for hockey to a global audience for the IEM series, but Reichert said the events are getting larger all the time.
As for how eSports have grown in recent years, the executive characterized it as a mostly organic process, and one that sometimes happens in spite of the major players. One mistake he’s seen eSports promoters make time and again is trying to be too far ahead of the curve.
“There have been numerous attempts to do celebrity leagues as a way to grow eSports, to make it more accessible,” Reichert said. “And rather than focusing on the core of eSports, the Starcrafts and League of Legends of the world, people tried to use easy games, put celebrities on it, and make a classic TV format out of it.”
One such effort, DirecTV’s Championship Gaming Series, held an “inaugural draft” at the Playboy Mansion in Beverly Hills and featured traditional eSports staples like Counter-Strike: Source alongside arguably more accessible fare like Dead or Alive 4, FIFA 07, and Project Gotham Racing 3.
“They put in tens of millions of dollars in trying to build up a simplified eSports league, and it was just doomed because they tried to simplify it rather than embrace the beauty of the apparent complexity.”
Complexity is what gives established sports their longevity, Reichert said. And while he dismisses the idea that eSports are any more complex than American football or baseball, he also acknowledged there is a learning curve involved, and it’s steep enough that ESL isn’t worrying about bringing new people on board.
“It’s tough for generations who didn’t grow up with gaming to get what Starcraft is,” Reichert said. “They need to spend 2-10 hours with it, in terms of watching it, getting it explained, and getting educated around it, or else they still might have that opinion. Our focus is more to have the generations who grew up with it as true fans, rather than trying to educate people who are outside of this conglomerate… There have been numerous attempts to make European soccer easier to approach, or American football, or baseball, but they all kill the soul of the actual sport. Every attempt to do that is just doomed.”
Authenticity is what keeps the core of the audience engaged, Reichert said. And even though there will always be purists who fuss over every change (Reichert said changing competitive maps in Starcraft could spark a debate like instant replay in baseball), being true to the core of the original sport has been key for snowboarding, mixed martial arts, and every other successful upstart sport of the last 15 years.
“Like with every new sport, the biggest obstacle has been people not believing in it,” Reichert said. “And it goes across media, sponsorships, game developers, press, everyone. The acceptance of eSports was a hard fought battle over a long, long time, and there’s a tipping point where it goes beyond people looking at it like ‘what the hell is this?’ And to reach that point was the big battle for eSports… The thing is, once we started to fill these stadiums, everyone looking at the space instantly gets it. Games, stadiums, this is a sport. It’s such a simple messaging that no one denies it anymore who knows about the facts.”
That’s not to say everybody is convinced. ESPN president John Skipper recently dismissed eSports as “not a sport,” even though his network streamed coverage of Valve’s signature Dota 2 tournament earlier this year. Reichert admitted that mainstream institutions seem to be lagging behind when it comes to acceptance, particularly with sponsors. While companies within the game industry are sold on eSports, non-endemic advertisers are only beginning to get it.
“The very, let’s say progressive ones, like Red Bull, are already involved,” Reichert said. “But to get it into the T-Mobiles and other companies as a strategy piece, that will still take some time. The market in terms of the size and quality of events is still ahead of the sponsorship, but that’s very typical.”
Toronto was the second stop for ESL’s IEM Season 9 after launching in Shenzhen July 16. The league is placing an international emphasis on this year’s competition, with additional stops planned in the US, Europe, and Southeast Asia.
While we would not call Alan Wake from developer Remedy Entertainment a disappointment, we would say that it took a long time to make, cost a lot of money, and didn’t quite live up to what everyone thought it would be in the end.
One thing about Alan Wake, however, is that over time it has gained a bit of a following. Creative director Sam Lake from Remedy has been quoted as saying that “while a sequel for Alan Wake didn’t work out at this point, we are definitely looking for opportunities to do more with Alan Wake when the time is right.”
As for when the time might be right, that is really hard to say. We know right now that the studio is hard at work on Quantum Break, which is on track for a 2015 release, so we don’t think we are going to see a sequel anytime soon. The good news for fans is that there does seem to be at least interest in a sequel.
Crytek will be self-publishing the PC version of Ryse: Son of Rome, which will be released on Steam on October 10th. Crytek promises a benchmark for PC gaming graphics, with support for 4K resolution.
The PC version promises a number of graphics enhancements over the Xbox One release, and Crytek claims that today’s hardware has given its developers the chance to really show what the Crytek engine can do without compromising quality.
To run the PC version of Ryse, Crytek requires a dual-core processor (2.8GHz Intel or 3.2GHz AMD), 4GB of RAM, 64-bit Windows 7/8, a DirectX 11-compatible graphics card with at least 1GB of video RAM, and 26GB of hard drive space. For the best experience, Crytek recommends a quad-core Intel or octo-core AMD processor, 8GB of RAM, 64-bit Windows 7/8, a DirectX 11 graphics card with 4GB of video RAM, and 26GB of hard disk space.
The PC release of Ryse is said to include all of the DLC content. While it certainly was a graphics showpiece for the Xbox One, reviews of the game were mixed. Still, the PC release could be just what the doctor ordered for Ryse to gain some new players. In addition to the Steam release, we are still hearing that a boxed release is coming as well, but we don’t have any specifics on that just yet.
The chips will be in five to seven detachable tablets and hybrids by year end, and the number of devices could balloon to 20 next year, said Andy Cummins, mobile platform marketing manager at Intel.
Core M chips, announced at the IFA trade show in Berlin on Friday, are the first based on the new Broadwell architecture. The processors will pave the way for a new class of thin, large-screen tablets with long battery life, and also crank up performance to run full PC applications, Intel executives said in interviews.
“It’s about getting PC-type performance in this small design,” Cummins said. “[Core M] is much more optimized for thin, fanless systems.”
Tablets with Core M could be priced as low as US$699, but the initial batch of detachable tablets introduced at IFA are priced much higher. Lenovo’s 11.6-inch ThinkPad Helix 2 starts at $999, Dell’s 13.3-inch Latitude 13 7000 starts at $1,199, and Hewlett-Packard’s 13.3-inch Envy X2 starts at $1,049.99. The products are expected to ship in September or October.
Core M was also shown in paper-thin prototype tablets running Windows and Android at the Computex trade show in June. PC makers have not expressed interest in building Android tablets with Core M, but the OS can be adapted for the chips, Cummins said.
The dual-core chips draw as little as 4.5 watts, making them the lowest-power Core processors Intel has ever made. Clock speeds start at 800MHz when running in tablet mode and scale up to 2.6GHz when running PC applications.
The power and performance characteristics make Core M relevant primarily for tablets. The chips are not designed for use in full-fledged PCs, Cummins said.
“If you are interested in the highest-performing parts, Core M probably isn’t the exact right choice. But if you are interested in that mix of tablet form factor, detachable/superthin form factor, this is where the Core M comes into play,” Cummins said.
For full-fledged laptops, users could opt for the upcoming fifth-generation Core processor, also based on Broadwell, Cummins said. Those chips are faster and will draw 15 watts of power or more, and be in laptops and desktops early next year.
New features in Core M curbed power consumption, and Intel is claiming performance gains compared to chips based on the older Haswell architecture. Tablets could offer around two more hours of battery life with Core M.
You’re sitting at home, watching one of the major E3 presentations. A brand-new AAA video game has just been revealed and the teaser trailer actually makes it look pretty hot. You’re halfway through watching the trailer, interest piqued, and now you’re wondering, “When’s this coming out?” Now you see it; it’s slated for the holiday season… of the following year. You’re going to be waiting a solid 18 months, and that’s assuming the project doesn’t encounter delays.
Such is the way of the modern AAA console and PC business, but it wasn’t always like this. While the industry never really saw Apple-like announcements when you could practically buy the product immediately after, recent history shows that game announcements used to happen more regularly around six months prior to shipping.
“Back in the PS2 days…if it was shipping in the fall, you usually would see it for the first time at E3. That’s if everything went according to plan. The running joke was if you saw it for two E3s, development was a problem,” noted industry veteran and consultant Christian Svensson.
So what happened? With the success of the PS2 and the continued boom in the industry, retail became increasingly more important, and pre-orders started driving everything. And naturally, more time before release meant more time for marketing and more time to drive pre-sales.
“Around the time that Xbox 360 and PS3 came to market, the investments and risks were so high you had to do everything you can to build awareness earlier,” Svensson said. “You had to build in more beats for your PR earlier, you had more shows to attend to drive hands-on and media exposure, and all of that was ultimately in the name of driving up your pre-order numbers… everyone was trying to lock down the day one consumer. That drove all of that mania where you had to announce 18 months to two years out.”
While pre-orders were a primary factor in the ever-lengthening lead time to a launch, there were other factors as well. Svensson pointed out that companies have always worried about early leaks twisting their messaging. “If we announced it first, at least we controlled the message. Announcing it early lets you prep all of your partners earlier without fear that there are leaks out there,” he said.
Beyond that, development cycles on big budget titles just grew longer and longer. Announcing earlier enabled teams to adequately judge and react to feedback.
Warren Spector (Deus Ex, Epic Mickey), Director of the Denius-Sams Gaming Academy at the University of Texas at Austin, remarked, “Talking about a game early is a double-edged sword, no doubt about it. On the one hand, it can lead to unrealistic expectations about ‘promised’ features that ultimately fail to make the shipping game (as inevitably happens). And there’s no doubt, public clamor can amp up the pressure on a team. On the flip side, seeing public excitement about what you’re doing can get a team ‘psyched and cranking’ as we used to say. It’s nice when people express enthusiasm for what you’re doing. Also, early reveals can help you gauge public opinion, which can be useful in weeding out undesirable features as well as ones you might want to focus on more. Early reveals cut both ways.”
Dominic Matthews, product development manager for Ninja Theory, added, “The risk with announcing too early is that you make a first impression that is very, very hard to change. You can say as many times as you like that the game is very early in development, or this isn’t finished or is work in progress, but players understandably don’t hear it. They just see what you’re showing and take it as representative of the finished game. Personally, I would have kept all of the games I worked on under wraps for longer.”
That said, Matthews acknowledged that most developers are usually very excited to be able to discuss their projects. “It’s actually a really positive thing for a developer to be able to share their work outside of the studio. The announcement of the game allows everyone in the team to be able to share what they are doing with friends, family and industry peers. It can be frustrating having to say ‘I’m working on something really cool, but I just can’t talk about it yet’,” he said.
There’s also the very tangible benefit that by announcing earlier, teams should have an easier time adding talent to make a project go more smoothly.
Gearbox Software boss Randy Pitchford commented, “It’s not merely about attracting future customers, but communicating about the effort to the industry itself. When your in-development project is known, some activities including recruiting or attracting business partners or other activities becomes much easier than when you’re silent under the radar.”
Svensson agreed: “[If] you’ve created some assets, you think you know what you’re going to build, but you still need some very key roles to be filled and/or just body count to do the work, when it’s known that a particular studio is working on that franchise then recruitment becomes an easier task than, ‘hey we’d like to call you in but we can’t tell you what we’re working on’.”
Of course, there’s another benefit to announcing early that some developers would be very keen on: once a project is revealed there’s a better chance it won’t be canceled. “One of the things people forget is that not every game put in development always ships. A reason a lot of teams would want to announce earlier is that it’s harder to kill a product that’s been announced because it’s very public and for it to not come out after it’s been announced is a difficult thing for a company to suffer. It raises questions about if the company knows what it’s doing,” pointed out Svensson.
Once the announcement gets out there, the pressure definitely ramps up on a development team. But that’s not necessarily a terrible thing. After all, it takes an intense amount of pressure to create a diamond.
“Sometimes pressure is a good thing on the development process,” said Pitchford. “The best amongst us game makers exist to try to entertain people and whenever we have a deadline we work crazy hard to do the best job we can as we know that once the deadline is up, there’s no more time to do any better.”
“In my experience a lot of that magic that just sort of works out is the result of trying to adapt to some kind of pressure on the situation. It often turns out that the pressure forces some of these things to happen that ultimately make games not only better, but shippable. The point is that while pressure always feels stressful, there are often a lot of positive aspects to pressure from a development point of view.”
Pitchford also noted that some of that pressure should be alleviated by a good publisher: “I think the only really negative consequence is about expectation management and that’s where the best publishers are really worth their value. The best publishers have a knack for managing customer expectations positively while projects unfold during the development and marketing phases of a project and that’s where you get the best feelings and results from a project.”
So if you’re planning a big budget game right now, when’s the right time to announce? How much lead time do you really need?
“I think it varies from product to product as far as what’s appropriate. An enormous AAA game that is new IP aimed at a monster retail release, a longer lead time, certainly north of a year, is still warranted,” advised Svensson. “When you start to get into north of 18 months, you get diminishing returns, even on something like that… When people have short attention spans, it’s hard to stay on people’s radar at a high level. I think the industry went too far for a period of time on that front and I think the economics of it are changing.”
Pitchford agrees that if you’re looking to sell something new, having that extra lead time is beneficial. “I’ve worked on games that have gone a long time in silence before being announced and I’ve worked on games that have had public announcements that were way too early. I think both approaches can be made to work, but both also bring their own set of challenges. My preference on which way to go depends on the game. The more inventive the game is and the more education required to communicate what is being promised, the more time is useful to master that communication before going wide,” he said.
It’s a fluid process, however, and the marketing teams have to be ready to adapt. Pitchford continued, “Part of the value of the early marketing campaign is to actually learn how to market the title to a wider audience. You’ll notice if you look at campaigns from start to finish that everything from logo designs to key messaging points to front-of-box and key art content evolves and iterates over the course of a project. This is a very tangible manifestation of the marketing team actually learning how to sell the thing they are selling through a careful process of testing and iterating.”
While early reveals can certainly be beneficial for both the marketing side and development side, it’s clear that the digital revolution is having an impact, noted Ninja Theory’s Matthews.
“I think the transition into digital gaming will shorten the window between announcement and release. There won’t be such pressure to drive pre-orders as there is in the retail space,” he said.
Another wrinkle in the digital space is the rise of self-publishing. Under that scenario, announcing earlier remains quite valuable.
“Ordinarily I would say that you should wait to announce as long as you can to make sure you have the best possible assets to make a first impression with: An amazing trailer or a rock-solid gameplay demo. Having said that, we’ve just announced our new game Hellblade at the very beginning of development – in other words incredibly early. We’ve done this because we’re self-publishing and actually want to build a community behind the game by sharing the development process,” Matthews continued. “By announcing now, we can share development right from the start. If we waited, we’d be retrospectively looking back at development which would feel less real, less here and now. This type of approach, or funding a game through crowdfunding, or Steam Greenlight might result in more games actually being announced even earlier.”
“The digital share of sales is climbing up and the need for that pre-order drive is slipping a little bit in the sense that you don’t have to have this crescendo to launch to necessarily find success with the right product, especially when you have live teams creating content post-launch; it’s not the put everything in the box and ship it mentality anymore,” Svensson explained. “It is the, ‘hey we’re going to create a minimum viable product (MVP) and we’re going to bring it to market and support it’ … In some cases you might not even really ramp the marketing until you feel you’ve got a good product to promote.
“To some degree, I think the pressure to announce early across the industry as a whole is being reduced because of the proliferation of digital, the adoption of games as service, and quite frankly, the other part of it is it’s really fucking expensive to have an 18-month or two-year marketing cycle for a game. It’s really hard to do, and not every game has the right kind of content to support that longevity. You can’t go dark, otherwise you lose people’s attention, you have to have a consistent set of beats all the way through from announcement to launch, otherwise why announce early? You’ve lost that benefit. It’s hard on production teams because they have to create assets to support these beats, it’s hard on marketing teams because it’s a long, hard slog.”
And with the rise of indies and smaller games published on platforms like Xbox Live and PlayStation Network, huge lead times make even less sense. For smaller digital projects, three months might be more than enough time to spread the word.
“One of the things we’ve learned doing digital products, announcing more than three months out to build awareness just really doesn’t make a lot of sense. A lot of those titles are smaller, they don’t necessarily have a lot of features to drive a six-month or nine-month campaign… They’re focused. The level of touch is very high in a short period, and I’d love to see the business get back to a lot more of that,” Svensson said.
“What I do think we’re going to see is a lot of normalization again for the average product probably around six to nine months again, kind of where we were in ’99 and 2000. And I don’t think that’s bad.”
AMD has explained that its new FreeSync technology will only work in new silicon.
FreeSync is AMD’s initiative to enable variable-refresh display technology for smoother in-game animation and was supposed to give Nvidia’s G-Sync technology a good kicking.
G-Sync has already made its way into some top gaming monitors, such as the Asus ROG Swift PG278Q.
However, AMD said that only its newest GPU silicon will support FreeSync displays. Specifically, the Hawaii GPU that drives the Radeon R9 290 and 290X will be compatible with FreeSync monitors, as will the Tonga GPU in the Radeon R9 285.
The Bonaire chip that powers the Radeon R7 260X and HD 7790 cards could support FreeSync, but that is not certain yet.
That would be fine if the current Radeon lineup used only newer GPU technology, but it is in fact a mix of newer and older silicon. What AMD is saying is that some brand-new graphics cards selling today will not support FreeSync monitors when those monitors arrive.
The list of products that won’t work with FreeSync includes anything based on the older revision of the GCN architecture used in chips like Tahiti and Pitcairn.
So if you have splashed out on a Radeon R9 280, 280X, 270, or 270X hoping it would be FreeSync-capable, you are out of luck. The same goes for any of the older Radeons in the HD 7000 and 8000 series.
Nvidia’s G-Sync works with GeForce graphics cards based on the Kepler architecture, which include a broad swath of current and past products dating back to the GeForce GTX 600 series.
Lenovo’s Tab S8 tablet, which runs on Google’s Android 4.4 OS, has Intel’s quad-core Atom chip, code-named Bay Trail. The chip is capable of running PC-class applications and rendering high-definition video.
The 8-inch S8 offers a 1920 x 1200-pixel resolution, matching that of Google’s 7-inch Nexus 7. The S8 is priced lower than the Nexus 7, which sells for $229.
The Tab S8 is 7.87 millimeters thick, weighs 294 grams, and runs for seven hours on a single battery charge. It has a 1.6-megapixel front camera and 8-megapixel back camera. Other features include 16GB of storage, Wi-Fi and Bluetooth. LTE is optional.
The Tab S8 will ship in multiple countries. Most of Lenovo’s tablets worldwide with screen sizes under 10 inches run on Android.
Lenovo also announced its largest gaming laptop. The Y70 Touch has a 17.3-inch touchscreen, and can be configured with Intel’s Core i7 processors and Nvidia’s GeForce GTX 860M graphics. It is 25.9 millimeters thick and is priced starting at $1,299. It will begin shipping next month.
The company also announced the Erazer X315 gaming desktop with Advanced Micro Devices processors code-named Kaveri. It can be configured with up to 32GB of DDR3 DRAM and 4TB of hard drive storage or 2TB of hybrid solid-state/hard drive storage. It will ship in November in the U.S. with prices starting at $599.
The products were announced ahead of the IFA trade show in Berlin. Lenovo is holding a press conference at IFA where it is expected to announce more products.