Why You Might Reconsider Your Password-less Smartphone

I'll admit it – I can be lazy when it comes to security. I don't have a passcode on my iPhone because it never leaves my pocket and I'm not one to misplace things. An article in Ars Technica has me rethinking my passé attitude toward passcodes and privacy, however. According to the article, a recent decision holds that "police officers may lawfully search mobile phones found on arrested individuals' persons without first obtaining a search warrant."

The decision (PDF) by the California Supreme Court allows police officers to "conduct a warrantless search of the text message folder of a cell phone they take from [one's] person after the arrest."

The solution to this, according to Ars Technica's Ryan Radia, is not simple, but setting a password is a good first step:

"While the search incident to arrest exception gives police free rein to search and seize mobile phones found on arrestees' persons, police generally cannot lawfully compel suspects to disclose or enter their mobile phone passwords. That's because the Fifth Amendment's protection against self-incrimination bars the government from compelling an individual to divulge any information or engage in any action considered to be 'testimonial.'"

Password protection, however, is just the beginning, says Radia. "If you care about your privacy, password-protecting your smartphone should be a no-brainer," he writes. "Better yet, you should ensure your smartphone supports a secure implementation of full-disk encryption."

For the full, in-depth discussion of the topic, definitely give Radia's article on Ars Technica a read.


A Bursting Market: Cisco Building APIs for Cloud Infrastructure Automation

Cisco's cloud computing strategy is starting to accelerate, with a focus on providing infrastructure that makes it easier to get started and expand quickly. Cisco's added emphasis on the cloud also highlights Intel, which is developing a cloud building program. The goal of the program is to build reference architectures with its partners, including Cisco, Hewlett-Packard, Dell, Enomaly, Canonical, Joyent, IBM and China's Huawei and PowerLeader. Citrix, Microsoft, Red Hat, Parallels and VMware are also part of the program.

Cisco highlighted what it presented at a recent Intel Cloud Builder event. In a blog post, Cisco's Brian Gracely says the company is following the concept of the "now," meaning its emphasis is on helping the customer get from the great idea to the implementation of the concept as quickly as possible.

A core aspect of this approach is the Cisco UCS API, designed to provide granular automation into compute, network and storage I/O. As Gracely explains:

"All of this automation and management functionality extends across multiple UCS chassis. But one thing that many people don't know is that the Cisco UCS Manager (UCSM), which ships with the UCS (via the Fabric Interconnect), is a Java implementation of the UCS API. So everything that can be done via the UCSM GUI can also be done by 3rd-party tools. Think about that for a second. That means that IT operations teams could get started with UCS + UCSM now to get familiar with the product and begin building Cloud Computing infrastructure today. Over time, they can begin automating all of this functionality without having to change any operational processes. Putting the power of 'now' into their hands at the pace that best aligns their IT skills and business needs."

Here's the deck Cisco presented: "Cisco – Intel Day in the Clouds," March 2011.
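To make that API point concrete, here is a minimal sketch of what driving UCS Manager programmatically could look like. It is illustrative only: the host is hypothetical, and the /nuova endpoint and the aaaLogin and configResolveClass method names are assumptions drawn from Cisco's published XML API conventions, not details stated in the article.

```python
# Minimal sketch of talking to the UCS Manager XML API over HTTPS.
# The host, credentials, endpoint path and method names below are assumptions
# for illustration; consult Cisco's UCS Manager XML API documentation.
import requests

UCSM_HOST = "https://ucsm.example.com"  # hypothetical UCS Manager address

def ucs_call(xml_body: str) -> str:
    """POST an XML request to the UCS Manager API and return the raw response."""
    resp = requests.post(f"{UCSM_HOST}/nuova", data=xml_body, timeout=30)
    resp.raise_for_status()
    return resp.text

# 1. Authenticate; the response carries a session cookie (outCookie) to reuse.
login_response = ucs_call('<aaaLogin inName="admin" inPassword="secret" />')

# 2. With that cookie, any managed object the GUI exposes can be queried, e.g.:
# ucs_call(f'<configResolveClass cookie="{cookie}" classId="computeBlade" '
#          'inHierarchical="false" />')
print(login_response[:200])
```

The point Gracely is making is that a third-party orchestration tool can issue the same requests the UCSM GUI does, so teams can layer automation on gradually without changing their operational processes.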
Cisco's data center architecture is now being used by OpSource, which announced new security features yesterday. ZDNet's Phil Wainewright:

"What caught my interest was the infrastructure underlying this offering. It's an all-Cisco hardware platform, part of the vendor's Data Center Business Advantage architecture. So all the firewall, VPN and load balancing surrounding the customer's server instances, as well as the instances themselves, are running on the same, shared, physical internal bus. The principal advantage of this is that it removes the latency inherent in other cloud platforms where servers are connected at LAN or WAN speeds. 'We offer sub-millisecond latency speeds,' OpSource's CMO Keao Caindec told me in a briefing last week. Whether tying a cloud infrastructure to a specific hardware architecture is truly in the spirit of cloud computing is a discussion I'd like to defer to a separate blog post; there are a number of angles to consider, and it's not as clear-cut as it might seem."

One argument in favor is the other advantage that OpSource exploits, which is the ability to offer customized configuration and governance of a customer's cloud instances within the shared infrastructure.

Intel will focus on building out reference architectures and developing ways to better manage servers in a dynamic environment. This includes "cloud bursting," a term for how servers respond to changing workloads. Intel is doing all of this in part to gain a greater share of the overall server market. According to an EETimes story:

"Intel hopes to reap compound growth over the next few years of as much as 20 percent in its sales of silicon for servers and other infrastructure gear, said Jason Waxman, general manager of Intel's data center group. International Data Corp. said worldwide server revenue increased 11.4 percent to $48.1 billion in 2010, while unit shipments increased 15.3 percent to 7.6 million units."

This all adds up to a significant trend we will see in the server market, data centers and the hosting world. Companies such as Cisco and HP are competing to provide the next generation of networking technologies and infrastructure for the new, modern data centers. Intel is seeking to partner with these companies to build reference architectures that provide the capabilities to manage workloads and overall server management within these environments.

This is why the cloud computing market is expected to be so significant. We're looking at new architectures for automation and server networks with new requirements. The market is only beginning to burst.


Are Heat Pumps Green?

Not If the Electricity that Powers the Pump is Generated by Burning Coal

An article in the Home and Garden section of the New York Times titled "Time to Worry About Heat Bills," by Jay Romano, talks about a winter heating option that will save you money: electric heat pumps. With the price of gas and oil skyrocketing, the article reasons, an electric heat pump will be a money saver this winter and eventually will end up "paying for itself."

But is it green? Not if you're getting your electricity from the grid. Coal plants have an efficiency of about 31%; put another way, almost 70% of the energy contained in a lump of coal is lost as heat when it's burned at a coal plant.

And along with that heat, tons of carbon dioxide are dumped into the atmosphere when the lump of coal is burned. And strip mining for lumps of coal leaves a mighty big footprint on the land.

I sure wish I could convince Henry Gifford to let me publish his manuscript on why heat pumps are not such a great idea.

—Dan Morrison is managing editor of GreenBuildingAdvisor.com.


Funds Approved to Help Put London in the Green

Over the past two years the U.K. has adopted some relatively progressive policies to improve the energy efficiency of its housing stock and commercial buildings. Wales set strict requirements for energy efficiency, water consumption, and use of sustainable materials. Britain's Climate Change Act, passed into law in November 2008, requires that, beginning in 2016, new residential construction meet net-zero-energy standards. The British government approved plans for four "eco-towns," and the UK Green Building Council, a government advisory group, suggested this week that the government seriously consider allowing each of Britain's 7 million homeowners to borrow up to $17,000 for green retrofits and have the loan amount added to the homeowner's local tax bill.

And now the city of London, which is preparing to play host to the Olympic Games and Paralympic Games in 2012, has taken an additional step to try to offset the inefficiencies of the homes in its 33 boroughs, about 60% of which were built before 1945. Through a $16 million initiative announced last week, the city will provide households with a number of free services, such as changeovers to energy-efficient light bulbs and light switches. The initiative also will subsidize more costly weatherizing improvements – such as the installation of wall and attic insulation – for homeowners able to pay for them, and make those improvements available for free to low-income homeowners.

Another tool for emissions control

The London Development Agency, which oversees infrastructure maintenance, employment, and business development for the city, developed the plan. It will administer the initiative in collaboration with London's mayor, Boris Johnson, and other city agencies.

One key goal of the plan – among the largest of several designed to help Londoners trim energy usage – is to help reduce London's carbon emissions by 60% by 2025, a target that is in line with the objectives of Britain's Climate Change Act.

"Climate change is one of the biggest issues facing London's economy," the London Development Agency's chief executive, Sir Peter Rogers, said in a press release announcing the measure. "This new scheme aims to make real cuts in carbon dioxide emissions for a cost-effective rate per ton of saved carbon. We have learned that this is best achieved by targeting particular areas and offering residents easy measures to implement."


EPA Looks at Fracking Risks to Water

The Environmental Protection Agency (EPA) recently released its long-awaited draft report on impacts associated with hydraulic fracturing on drinking water, completing the most extensive scientific review of published data to date. At nearly 1,000 pages, it's a substantial report. But it's nowhere near a comprehensive evaluation — or even enumeration — of the risks that oil and gas development poses to both surface and ground water. The biggest issues aren't what's in the document, but what isn't. For all its heft, the biggest lesson in the report is just how little we actually know about these critical risks.

While industry advocates are touting the report as wholesale exoneration, newspapers including The New York Times and Washington Post recognized that activities related to hydraulic fracturing do, in fact, pose real pollution risks to drinking water. Although EPA didn't find evidence of hydraulic fracturing activities causing widespread, systematic drinking water contamination, they did find many instances of localized impacts to water supply and water quality.

Even in the limited scope of activity studied in the report, EPA also referenced hundreds of spills of hydraulic fracturing fluid and so-called "produced water" — the mixture of hydraulic fracturing fluid and salty water found naturally underground that comes back to the surface once the well is drilled — many of which EPA says resulted in contamination of water and soil.

Just the tip of the iceberg

Because of the huge size and massive scale of these oil and gas operations, the risks, however well managed, are genuine and numerous. Hydraulic fracturing itself is just one factor, and not even the biggest one. Other key issues include the ongoing physical integrity of the wells and the storage, transport, and disposal of some 800 billion gallons of wastewater generated annually by onshore oil and gas operations in the United States.

Contamination risk associated with handling this wastewater is high, and the consequences can be dramatic. In many areas, this produced water is far saltier than sea water. It will kill plants, and can ruin the landscape for decades. It's often laced with up to hundreds of toxic chemicals (antifreeze, to name just one). Gallon for gallon, in other words, a water spill could be even more dangerous for the environment than an oil spill.

The potential for leaky underground injection wells to pollute water supplies, not evaluated in the EPA report, is another crucial pathway that is critical for regulators and industry to control. More than two billion gallons of produced water are disposed of in these wells every day.

Another emerging disposal issue is how to protect water supplies when we know little about the environmental characteristics of the wastewater, particularly in situations where industry is given the go-ahead to discharge it into rivers.

Serious data limitations

First, the report is a review of existing studies. EPA did almost no original scientific research or fieldwork. Nor does it include much in the way of actual water quality readings — or baseline, pre-drilling data by which to compare. EPA doesn't use this data because, for the most part, it doesn't exist.

That's a serious knowledge gap that needs to be filled. But in the meantime, it's a mistake to think there are no problems just because they don't turn up in the extremely limited data available. Indeed, EPA expressly acknowledges in the executive summary that "data limitations preclude a determination of the frequency of impacts with any certainty."

Flying blind

The impact of the unconventional oil and gas boom on our water supply is not well understood, and the findings of the EPA report underscore just how much work remains to be done to fully comprehend the risks, the magnitude of impacts, and the best ways to manage the risks.

Better and more accessible data on activities surrounding hydraulic fracturing operations is needed. There's been some progress, and the EPA study is a step in the right direction in terms of better understanding this issue, but by no means are we out of the woods.

Between 2000 and 2013, almost ten million Americans lived within one mile of a hydraulically fractured natural gas or oil well. They deserve as much information as they can get. The Environmental Defense Fund's oil and gas team is poring over the lengthy report and will post further analysis of the report's various pieces, so stay tuned.

Mark Brownstein is a vice president in the climate and energy program at the Environmental Defense Fund, where this post was originally published.


The Low Cost of Passivhaus Living

Gary Konkol built a state-of-the-art home in 2009, spending more than $1 million on what came to be called "The Passive House in the Woods" in Hudson Township, Wisconsin. It was not cheap, to be sure, but a look at five years' worth of utility bills shows it doesn't cost much to live there.

An article in the Pioneer Press adds up the power bills in Konkol's Passivhaus-certified home: From December 2010, when Konkol moved in, to August 2015, actual electricity use averaged $6.70 per month. Total utility charges were $2,056, but the lion's share of that, $1,674, went to fixed meter fees from St. Croix Electric. The balance, $382, was for the electricity the house actually consumed.

The house, described in several articles at GBA by Richard Defendorf as well as in a blog by architect Tim Eian, was at the forefront of energy-efficient and sustainable design at the time it was constructed. "We did it before it was commercially viable. … He went to where the leading edge clearly became the bleeding edge," Eian told the Pioneer Press. "It is as energy-efficient, water-efficient, environmentally friendly as you could possibly make it in 2009. Inside and out."

It was the first house in Wisconsin and one of the first few in the country to win Passivhaus certification, and while construction was well documented, the cost of building and operating it had not been detailed.
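Those utility figures are easy to sanity-check. The short sketch below uses only the numbers reported above; the 57-month billing span is an assumption inferred from the December 2010 to August 2015 dates.

```python
# Back-of-the-envelope check of the utility figures reported for the house.
# The 57-month span (Dec 2010 through Aug 2015) is inferred from the article.
months = 57
total_utility_charges = 2056.0   # dollars over the whole period
fixed_meter_fees = 1674.0        # St. Croix Electric meter fees

electricity_cost = total_utility_charges - fixed_meter_fees
print(f"Electricity actually consumed: ${electricity_cost:.0f}")                # ~$382
print(f"Average electricity cost per month: ${electricity_cost / months:.2f}")  # ~$6.70
```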
Why it cost so much to build

The house has a net floor area of 1,940 square feet, making its $1.1 million construction budget far above averages for the area. But in a telephone conversation with GBA after the article was published, Eian said focusing on the relatively high cost of the house is missing the point: it was never designed as a model of economy for average buyers. And, Eian added, fixating on costs also overlooks an important reason for considering Passivhaus construction — environmental stewardship.

"There's nothing mainstream about it," he said. Comparing Konkol's house with lower-cost energy-efficient designs in the area, which the article set out to do, "is like comparing the first moon shot to what a Shuttle space mission would be. It's apples to oranges."

Among the features that added to construction costs:

- A 4.7-kilowatt photovoltaic array that includes rooftop panels and a dozen ground-mounted panels that can track the sun — a $73,000 feature, according to the newspaper.
- A solar thermal system for domestic hot water.
- Site development costs on the steep lot of $175,000.
- Interior walls coated with American Clay — an expensive and labor-intensive choice.
- A computer-managed electrical system, and a long list of change orders during construction.
- Enhanced building insulation and windows, resulting in R-70 walls, an R-60 slab, and an R-95 roof.

"There was no economy in the budget," Eian said.

Energy performance could have been even better

The three-bedroom house has an enviable energy record, but Eian says it could easily have been even better. For example, Konkol's big fruit and vegetable garden uses a lot of water. Roughly one-sixth of the 6,000 kWh of electricity consumed annually goes into drawing water from the well on the property and filtering it. If the house had been built in an urban area with a public water supply, energy consumption would have been that much lower.

The tracking system for the solar panels has failed more than once, lowering the amount of electricity the array was able to produce until repairs were made. Finally, the solar thermal system sprang a leak, forcing Konkol to use electrical resistance heat for domestic hot water until the system was repaired.

Even with those energy setbacks, total energy consumption adds up to a little over $1 a day. One reason, Eian said, is that the site had only so much solar potential to offer, and the house was designed around it. In other words, the house and its mechanical systems were built around the expected output of its renewable energy systems. Another key is the owner's living habits and interest in energy frugality — and there's no question that Konkol watches what he uses very carefully.

"Even though many people view my house as being extreme in its performance and design, I believe that there will be a day that my design and energy efficiency will be commonplace," Konkol told the Pioneer Press. "When it's got merit, it finds a way of becoming mainstream over time."


Climate Change Resilience Could Save Trillions

Is your city prepared for climate change? The latest National Climate Assessment paints a grim future if U.S. cities and states don't take serious action to reduce greenhouse gas emissions. The bottom line is that the costs of climate change could reach 10% of the entire U.S. economy by the end of the century — or more than $2 trillion a year — much of it in damage to infrastructure and private property from more intense storms and flooding.

Cities can greatly reduce the damage and costs through adaptive measures, such as building seawalls and reinforcing infrastructure. The problem is that such projects are expensive, and finding ways to fund the cost of protecting cities against future and uncertain threats is a major financial and political challenge — especially in places where taxpayers have not yet experienced a disaster. I've been part of a team that has been evaluating options for protecting Boston, one of America's most vulnerable coastal cities. Our analysis offers a few lessons for other cities as they begin planning for tomorrow's climate.

Investing in adaptation

A team of scientists from 13 federal agencies contributed to the fourth U.S. National Climate Assessment, which recently laid out the stark threats Americans face from sea level rise, more frequent and intense storms, extreme precipitation, and droughts and wildfires. For example, the report notes that coastal zone counties account for nearly half of the nation's population and economic activity, and that cumulative damage to property in those areas could reach $3.5 trillion by 2060.

The good news is that investing in adaptation can be highly cost-effective. The National Climate Assessment estimates that such measures could significantly reduce the cumulative damage to coastal property to about $800 billion instead of $3.5 trillion. The report does not, however, examine the complex problems of implementing these adaptation solutions.

The adaptation devil is in the details

The Sustainable Solutions Lab at the University of Massachusetts Boston has been closely involved with its host city and local business and civic leaders in devising such climate adaptation strategies and figuring out how best to implement them, including a study I led on financing investments in climate resilience. Our work identified a series of hurdles that make financing such projects difficult.

One key problem is that while public authorities — and taxpayers — will ultimately bear the cost burden of coastal protection, the benefits mostly accrue to private property owners. Higher property taxes or new "resilience fees" will be on the table – and unlikely to be politically popular.

Another problem is that resilience investments primarily prevent or reduce future damages and costs but don't create much new value, unlike other public investments such as toll roads and bridges. For example, an investment in a sea wall might prevent property prices for coastal homes from falling or insurance premiums from rising, but it won't generate any new cash flows to defray the costs for the city or homeowner.
Beware the big fix

In a separate study, we examined the feasibility of building a four-mile barrier across Boston Harbor with massive gates that would close if major storms threatened to flood the city. We estimated that the project would cost at least $12 billion and could take 30 years to plan, design, finance, and build. Ultimately we concluded it was unlikely to be cost-effective and urged city officials to abandon the idea.

One key problem is the uncertainty regarding the extent and pace of sea-level rise, which is forecast to reach anywhere from 2 to 8 feet by the end of the century. But we really don't know. By the time the barrier would become operational mid-century, we might realize that we didn't need it — or worse, that it is woefully inadequate. As sea levels rise, the gates, which would be the largest of their kind in the world and take many hours to open or close, would need to be activated more frequently and could potentially fail.

In addition, the cost of such a barrier would be difficult to finance in an era of growing federal deficits and would choke off capital required for other more urgent adaptation projects. In other words, it's risky to put all our adaptation eggs in one very expensive basket.

The incremental solution

Instead, our group recommends that Boston and other cities pursue more incremental shoreline protection projects focused on the most vulnerable areas. Examples include constructing seawalls and berms, elevating some roads and parks and creating incentives for property owners to protect their buildings. The key attraction of such an approach is that capital can be targeted in highly cost-effective ways to the most vulnerable areas that need protection in the short term. It also allows for more flexible planning as the science improves and climate impacts come into sharper focus. Boston is already considering some projects like this that would cost around $2 billion to $2.5 billion over a decade or two.

Coming up with that much money is still a big challenge, but it's far more cost-effective than the harbor barrier. Another benefit is that this neighborhood-level approach would facilitate more local economic development and community participation. While making these areas more resilient, such investments would also involve upgrades in housing, transportation, and other infrastructure. This would go a long way toward ensuring that the community and taxpayers are on board when the discussion turns to costs.

Fair and equitable

Adapting to climate change will be a mammoth challenge for cities and citizens across the country — and the world. Finding ways to finance adaptation in a fair and equitable way will be a prerequisite to success. Miami, for example, last year issued a voter-approved $400 million bond to pay for about half its planned resilience projects. In August — exactly a year after their region was devastated by Hurricane Harvey — most voters in Harris County, Texas, approved a $2.5 billion bond to pay for flood protection. And just last month, citizens in San Francisco approved a $425 million bond to pay a quarter of the costs of fortifying a sea wall.

One problem with these projects is the heavy reliance on bonds. We found that it would be better to spread the costs of protecting cities and towns across multiple levels of government and private sources of capital, and utilize a range of funding mechanisms, including property taxes, carbon-based fees, and district-level charges.
The hope is that voters and cities will approve such projects before disaster strikes — not after.

David Levy is professor of management and director of the Center for Sustainable Enterprise and Regional Competitiveness at the University of Massachusetts Boston. This post originally appeared at The Conversation.


Google Making Fashion Week Documentary Using “Glasses”

Some of the most beautiful women on the planet have geeked out hard this week, donning Google Glasses on the catwalk for New York's Mercedes-Benz Fashion Week.

Designer Diane von Furstenberg's line teamed up with Google to unveil a futuristic collaboration, merging fashion and technology. Models have been paired with colored Glasses that match their outfits. Google will edit the footage recorded by each of the Glasses to create a documentary shot from each of the models' perspectives. Look for it on the Google Glass Google Plus channel this week.

"Beauty, style and comfort are as important to Glass as the latest technology," Sergey Brin said. The Google co-founder joined von Furstenberg in the front row of the show, each sporting a pair.

Wearable fashion technology (or is it tech fashion?) is an eye-grabber, and with Google planning to sell the Glasses in retail locations next year, this was a savvy PR move. The first Glass release, the "Explorer Edition," is reportedly priced at $1,500, but that's not stopping the buzz.

If nothing else, it's a big personal move for Brin, whose label faves begin and end with Crocs.


TVSync’s Open Platform Weds Social TV & E-Commerce

Social TV has a new player that's worth watching. Last month, Vobile launched TVSync, an open platform that introduces new broadcast and cable TV streaming options. TVSync could finally do what predecessors dreamed of doing: mesh social media, streamed entertainment and curated content across multiple screens.

Big money is at stake. The U.S. media and entertainment industry is expected to spend $3.58 billion in digital ads this year and an estimated $6.19 billion by 2016, according to a September report from the digital marketing and media research site eMarketer.

With a slew of companies in the field, including GetGlue, Echo and even Apple, it may be crowded. But none of the players in the space marry both the social and e-commerce sides of the business. This opens the door for TVSync.

The month-old TVSync will mesh content, e-commerce and social networks across four screens – desktop, tablet, smartphone, smart TV – all in real time. TVSync could provide instant polling on reality shows and news events and embed live social-media activity in the show feed, or even into the storyline. TVSync does all this through a white-label content-identification system that processes more than 2 million videos and 8,000 hours of audiovisual content per day, according to Yangbin Wang, CEO of Vobile. That may be enough to service large traditional media companies with major data caches.

The company's pricing is designed to allow large media companies and smaller, upstart broadcasters and individuals to play in the space. (The price is determined by the number and density of calls to the application programming interface per month and the size and length of audiovisual files available for matchmaking.) "If you are a startup creating an app with a small installed base, then the cost would only be a fraction of that charged to a large media outlet with thousands of hours of cataloged content and millions of users," Wang explained.

Linking Up Content

While rivals GetGlue, Echo and Shazam specialize in curating conversations and enabling content discovery, they haven't leveraged that activity into sales. This is where TVSync is doing something different. The company has developed a smartphone innovation that makes purchases as easy as pointing and clicking. Wang says consumers will be able to buy a team jersey during a game or merchandise hawked by TV show stars just by pointing their smartphone at the screen and holding the phone like they're taking a picture. The service recognizes the content automatically, similar to IntoNow.

In a little less than a month, there have been more than 300 requests for access to the platform, which is compatible with iOS and Android. To get going, interested developers receive a software development kit that Wang says "includes everything developers need to utilize the TVSync platform."

If TVSync can get enough developers and traditional media organizations to adopt its service, build on top of it, and get major advertisers onboard, this new system could potentially become the dominant force in the emerging interactive TV space. Vobile, founded in 2005, has a history in Hollywood, helping the MPAA protect films from piracy. But it's unclear whether the company will be able to use that background to secure future contracts.
Wang says he isn't betting everything on TV. He wants to provide a new medium for Fortune 500 corporate training and content management. He says an editor using his platform can search for audiovisual content in a large digital library much faster than by searching a conventional content-management system.

Media Evolution Or . . . ?

Marketers have been clamoring for TV-driven e-commerce since the early days of the Internet. Could TVSync make it happen?

"It's clearly a step in the right direction [for app developers]," said Brian Norgard, the co-founder and chief executive of Chill, a social video discovery site with 19 million registered users and a recent $8 million round of funding.

Norgard thinks TVSync could catch on, and cites the open-platform approach as a major reason. He says opening traditional and cable broadcast television's historically closed platform could lead to innovation within the medium. "I think what they're saying is, let's skip building the consumer app layer and let everyone build on top," Norgard explained. "The technology they're providing opens up a lot of possibilities and makes a closed platform less closed."

Brian Steinberg, TV editor for Ad Age, remains unconvinced that social TV has enough of a future to make this kind of service successful long term. He thinks the field can't sustain the growing demand for return on investment in ad revenue. In a crowded field, Steinberg says, TVSync will face a challenge in winning over content creators. "Ultimately I think social TV is in a bit of a bubble right now," Steinberg said. "It's not quite clear to me that the audience they attract watch the shows and take part of what advertisers want to do."

This year Twitter spent almost $260 million on ads in the U.S., mostly on big television shows and news events, like the Video Music Awards, Grammys, Emmys and the Super Bowl. It hoped to capitalize on content with heavy engagement, like AMC's Breaking Bad, said Clark Fredricksen, vice president of communications for eMarketer.

Fredricksen sees TVSync less as a social platform and more as an e-commerce tool. "The social component seems pretty small," he said. "What it really sounds like is an open API that allows a publisher or a studio to integrate on their side. Then the TV viewer would integrate on their site."

The digital marketing strategist says the area is hot but crowded. The draw for TVSync could be the e-commerce function. "Marketers are very interested in reaching people from that second screen," Fredricksen said. "The prospect of adding a commerce component to that already engaging environment is very compelling."

The possibilities are limited only by what marketers come up with and the deals TVSync can broker. Right now the door is open to the potential for broadcasters, cable companies, publishers and entrepreneurs to synchronize video or audio content with connected TV and second-screen experiences. Stay tuned.


Big Data Is Creating Big Job Demand

Programming and development abilities top many employers' most-sought-after-skills lists, as big data and mobile-platform development jack up demand to new levels.

Wall Street firms, for example, are searching hard for programmers with a side of database skills, according to employment recruiter eFinancialCareers, which specializes in financial gigs. When the site posted its top 10 skill searches for the summer of 2012, programming languages and databases were at the top "by a wide margin," a company statement reported.

It's also evident what's driving a large part of this aggressive searching: big data. That's because C and Java programming skills were the top specific skills sought for data applications that need C's speed of execution and data engines like Hadoop that are all about the Java. The next most-sought skill? SQL, the database query language that's still very pervasive in relational databases and even in some of the non-relational databases that are such hot properties in big-data land these days.

"The next four skill sets on Wall Street's 'buy' list are fixed-income, risk, project-management and business analysis. Technology pervades these jobs as well," an eFinancialCareers representative said.

Consider job listings such as the one for an investment bank and securities firm looking for a fixed-income quantitative analyst who's well-versed in matrix-oriented programming languages such as MATLAB, R, Python, or GAUSS, and who has a working knowledge of Ruby, VBA, SQL, and database programming. The meshing of technology and business skills is a big "get" for most businesses, as any geek who can speak numbers or any suit who can grok tech is a highly sought candidate.

Programming and development is also the big target for the technology sector. Tech job site Dice.com recently released its top-skill requests for the year as of October 1, and software development topped its list, too. Quality assurance came in second, followed by Python, SOAP and virtualization skills, respectively.

"Software development is beyond compare in today's tech-job market. Even if you are not an engineer – many hiring managers want candidates to have a thorough understanding of the software development lifecycle. More development equals more QA or ensuring a project, product or service meets certain standards and satisfies requirements," wrote Managing Director Alice Hill on the Dice blog.

High finance is not the only sector hiring tech workers. Last month General Motors announced it would hire up to 10,000 IT workers globally, kicking it off with 500 new IT slots in its Austin "innovation center." Yesterday, GM followed up on that, announcing a new innovation center in the Detroit suburb of Warren, Mich., which will need 1,500 IT staffers. The company plans to open two more such tech centers in the United States soon, spreading the wealth, as it were, to make up for some regional shortages in programmers and developers.

Programming has been at the top of the career skills lists for quite a while, and there are no signs of this demand abating any time soon. Between big data and mobile-application demand alone, those who code well should have more employment opportunities for some time to come.


The iPad Mini’s Killer Feature = Price

Consumers looking for a tablet computer this holiday season will have plenty of great choices. Microsoft has its Surface RT tablets ready for pre-order, there are new Kindle Fires from Amazon, and Google's Nexus 7 tablet leads a wide variety of quality Android options. And the big dog of the tablet market is about to enter the fray yet again, as Apple seems about to unleash a brand new – and smaller – iPad to the market. Which ones will consumers flock to? Apart from the already-successful full-size iPad, the answer, unsurprisingly, will likely have a lot to do with price.

All Tablets Are Tweeners

The tablet market is different from that of other gadgets. Smartphones, for the most part, vary little on price and typically range between free and $199 (on a carrier contract). Consumers seem willing to pay top dollar for a computer they think they absolutely need to be productive. Tablets are different: while many people believe they need a mobile phone and a computer to meet their personal and business goals, a tablet is more of a "not necessary, but nice to have" type of device.

The reality of the tablet as a tweener is what makes price so important in the purchase decision. Apple set the market standard with the original iPad starting at $499. Every other tablet to hit the market since has had to react to that price point in one way or another. Some competitors failed, such as Motorola and Samsung, by pricing the Xoom and Tab 10.1 slates too high. Others, such as Amazon's Kindle Fire and Google's Nexus 7, competed by shrinking the screen size and pricing aggressively at $199. When it comes to battling the iPad, the competition cannot bet on superior hardware or user experience because, rightly or wrongly, Apple is perceived to be the clear leader on those fronts.

Price Is The Key

Analytics company comScore's quarterly TabLens research bears this out. Nearly 46.3% of iPad owners in comScore's survey make $100,000 or more. In comparison, only 32.5% of Android tablet owners and 33.3% of Kindle Fire owners make that much. (Note: the data is from August, before the release of the newest generation of Kindle Fires.) Some 40.4% of Android tablet owners and 42.2% of Kindle Fire owners make between $25,000 and $75,000. Only 31.6% of iPad owners fall within that income range.

If, as now seems inevitable, Apple is truly going to announce an iPad "Mini" next week, the price point in relation to the competition is going to be important. The lowest Apple could conceivably go would be to match the $199 price of the Nexus 7 and Kindle Fire. Rumors say that the price for the Mini will start at either $250 or $299, which some analysts think is too high if Apple truly wants to compete in the smaller form factor tablet market.

If Apple can price the Mini competitively, it could create a perfect storm for market domination. According to comScore, in addition to price, the other top consideration for tablet buyers is what kinds of apps are available. As of July this year there were about 250,000 iPad-specific apps. Exactly how those translate to the supposed 7.85-inch dimensions of the Mini remains to be seen, as does how the Mini will use iPhone apps. But any way you slice it, Apple's iOS has the most tablet-specific apps of any of the mobile operating systems.
Then, of course, there is also the iPuppies Effect, where millions of people seem willing to buy anything and everything that Apple makes – just because Apple makes it. The iPad Mini should sell well on the consumer market for that reason alone.

How low does Apple have to go to make a killing with the iPad Mini? Share your predictions in the comments.


Verizon Fell Behind AT&T In Q4 With 9.8 Million Smartphone Sales

In a note to the Securities and Exchange Commission, Verizon Wireless said yesterday that it sold 9.8 million smartphones in the fourth quarter of 2012. In the brief release, Verizon noted that the total smartphone sales included "a higher mix of Apple smartphones."

Unlike its top rival AT&T (which yesterday said it had sold in excess of 10 million smartphones last quarter, including record numbers of iPhone and Android devices), Verizon has shown more historical balance between the two dominant smartphone operating systems. In the third quarter of 2012, Verizon sold 3.1 million iPhones out of 6.8 million total smartphones, good for 45.5%. Verizon did not say in its SEC note how many iPhones it sold this last quarter, but expect the number to be closer to a 50-50 split with Android.

iPhone's Magic Powers

The question to be asked is: why would Verizon mention iPhone channel sales in its SEC note at all? Well, right or wrong, the carriers (and hence, investors) tend to think of iPhone owners as more lucrative consumers. iPhone owners tend to be loyal, thus giving carriers guidance for how many net postpaid subscribers they will have years down the line. So, if I am a carrier, I want to show investors that I have a large number of people using Apple products as a sign of the health of my business.

As the rest of the fourth quarter smartphone sales figures from the top carriers in the United States come in, we are once again likely to see that Apple dominates the top of the American smartphone market. In Q3 2012, Apple controlled about 58.1% of U.S. smartphone sales for the three largest carriers (AT&T, Sprint and Verizon). Of those three, AT&T provides the biggest cushion for Apple, taking between 70%-80% of its total smartphone sales. As noted yesterday, AT&T likely sold more than 7.6 million iPhones last quarter. If Apple has a stronger Q4 with Verizon, the iPhone may break the 60% mark for control of U.S. market share among the big three.

Research analytics firm comScore notes that Android still controls the overall U.S. smartphone market. According to comScore Mobile Lens, Android's share of U.S. subscribers grew 1.1 percentage points between August 2012 and November 2012 to a total of 53.7%. Apple grew 0.7 points in that same period to 35% of U.S. subscribers.


T-Mobile May Have Killed The Smartphone Contract, But It Doesn’t Save You Money

In the United States, the smartphone contract is king. T-Mobile, the smallest of the major American wireless carriers, wants to end the reign of two-year contracts, phone subsidies and early termination fees. It even argues it can save you money in the process. Well, at least part of that is true.

T-Mobile is instituting its plan to kill the subsidy-and-contract model for U.S. smartphone buyers. Instead of paying one lump sum for a smartphone and 24 months worth of contract, consumers can pay a minimal upfront cost for a smartphone and then a monthly fee as part of their bill. For instance, if you want to buy a Samsung Galaxy S3 with 16GB of storage from T-Mobile, you can pay $69.99 up front and then $20 a month on top of your phone bill for 24 months. If buyers prefer, they can pay the full amount of the phone up front and skip the monthly installments.

T-Mobile's wireless plans start at $50 for one line and 500MB of data. Users get 2GB of data for $60 and unlimited data for $70. Add the monthly smartphone fee into the equation and users are still going to get cellphone bills between $70-$100 on a monthly basis.

How The Numbers Break Down

On one hand, consumers will be happy with the fact that they are not on a contract. Ostensibly, that means they can leave whenever they want. But they still have to pay for the smartphone they bought. One way or another, users are going to pay the entire unsubsidized cost of their new smartphone.

For instance, if you choose to get a Samsung Galaxy S3, you are going to eventually pay $550 for the phone: $70 up front plus the $20 fee per month. If you get the unlimited data plan at $70 a month, your total cost is $2,230 for the life of the phone. (If you look at the fine print in T-Mobile's contract, it will start throttling users back to "2G" speed after 5GB of data use.) If you go with the bottom-tier plan at 500MB of data, the total cost of ownership is $1,750. The most popular tiered plan will likely be the $60 a month for 2GB of data. That will run you a total of $1,990.

For comparison, the average user consumes about 2.3GB of data per month. That includes moderate to heavy usage without playing an excessive amount of video or using your smartphone as a hotspot (which usually requires a separate charge from the carriers).

T-Mobile does have cheaper phones available. The Windows Phone 8X from HTC costs $0 at checkout and $18 a month, for a total of $432. A Nexus 4 will cost you a down payment of $49.99 and $17 a month, for a total of $457.

Let's compare T-Mobile to AT&T, which uses the "traditional" subsidy model. If we go with the baseline plan for AT&T, we are paying $40 for 450 minutes of voice time (an unlimited voice plan will go for $70). Then you add in a data plan tier and messaging. The most popular is 3GB for $30 and $20 for unlimited texts. That is $90 a month, or $2,160 over a 24-month contract. Now, assume that you paid for a $200 subsidized iPhone or brand-new Android. That shakes out to $2,360. So, the difference between the most comparable plans on AT&T and T-Mobile is about $100 in favor of T-Mobile. Good, but not exactly earth-shattering.

So, if we look at the baseline plan plus cost of device between AT&T and T-Mobile, you can actually pay less on a contract with Ma Bell than you do with Big Pink over 24 months. Depending on the smartphone you buy, you can end up paying more per month and over a 24-month period with T-Mobile. Does that contract really look so bad now?
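For readers who want to check the math, here is a short sketch that reproduces the totals above from the prices quoted in this article (they are the figures cited here, not current carrier rates).

```python
# Total cost of ownership over 24 months, using the prices quoted above.
MONTHS = 24

def total_cost(down_payment, device_monthly, plan_monthly, months=MONTHS):
    """Up-front device cost plus monthly device installments and service."""
    return down_payment + (device_monthly + plan_monthly) * months

# T-Mobile: Galaxy S3 at $69.99 down plus $20/month, across the three data tiers.
for label, plan in [("500MB", 50), ("2GB", 60), ("unlimited", 70)]:
    print(f"T-Mobile S3, {label:9}: ${total_cost(69.99, 20, plan):,.2f}")
# -> roughly $1,750, $1,990 and $2,230, matching the figures above.

# AT&T: $200 subsidized phone; $40 voice + $30 (3GB data) + $20 texts = $90/month.
print(f"AT&T subsidized, 3GB : ${total_cost(200, 0, 90):,.2f}")   # $2,360
```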
T-Mobile's Motivations

If you ever listen to a quarterly earnings call from the executives at AT&T or Verizon, they often lament the damage that smartphone subsidies do to their bottom line. You may pay $199.99 for a new Samsung Galaxy device from Verizon, but the carrier is paying the full $550. That is millions of dollars in upfront costs that the carriers absorb. The iPhone is especially cumbersome on carriers' bottom lines, and the more smartphones the carriers sell, the worse for wear their quarterly earnings are. AT&T, Verizon and Sprint (and yes, T-Mobile) make up the money through the life of a contract. If a user wants out of that contract, they have to pay an early termination fee.

T-Mobile is doing away with the subsidy by passing on the cost of the phone directly to the consumer. You may not be on a contract, per se, but you are still going to pay a termination fee (the remaining cost of the device plus any other T-Mobile fees) if you want to leave.

T-Mobile will also allow users to bring their own smartphones with them. So, if you have an unlocked iPhone from AT&T, all you need to do is get a $10 T-Mobile SIM card and activate it on T-Mobile. That way T-Mobile doesn't have to deal with the smartphone manufacturer at all and can just make money providing data. Too bad it is currently illegal to unlock your cellphone.

The aim for T-Mobile is to take over the bottom of the smartphone market in the U.S. Users that do not need a lot of data and want a very cheap phone can do very well on T-Mobile's plan. If you want an older phone, like the Samsung Galaxy Exhibit, you will pay $240 for the phone and $1,200 for 500MB of data a month over 24 months. Unless you want to get a straight prepaid plan from the likes of Cricket, that is about as cheap as it gets among the four major carriers.

The fact of the matter is that, one way or another, you are going to pay both the carrier and the smartphone manufacturer. There is really no way around it. The wireless carriers in the U.S. will always try to convince you that their service is better, faster, cheaper. The truth is that you will pay nearly the same (within a couple hundred dollars) no matter which carrier you choose. If you want a new, top-end smartphone, you are likely better off with the two-year contract from one of the larger carriers.

Update: Article updated to reflect the $20-per-month text messaging charge from AT&T.


So Dropbox Can Be Hacked—What Else Is New?

Dropbox has had its share of security woes. One day, wayward code breaks authentication protocols. Another time, user logins get stolen from third-party sites. Now it's a couple of researchers stretching their hacking muscles and proving they could lay waste to Dropbox's security measures.

For users, this may be genuinely alarming news — particularly for those who depend on Dropbox heavily. I certainly do. So perhaps I should feel upset or unnerved by this. But I'm not. At all. Here's why.

How Dropbox Got Ripped Open

What's clear is that these researchers have no bad intentions. Dhiru Kholia and Przemyslaw Wegrzyn, authors of the paper "Looking inside the (Drop) box" (PDF), just wanted to prove they could do it. And they did. They wowed the developer community by reverse engineering the cloud storage service's desktop application.

Reverse engineering, or figuring out an app's development by working backwards starting with its finished product, is a fairly common practice. But few thought Dropbox could be vulnerable to it. The app was written in Python and relied heavily on obfuscation, meaning it was intentionally designed to conceal source code. That didn't stop Kholia and Wegrzyn. They write:

"We describe a method to bypass Dropbox's two-factor authentication and hijack Dropbox accounts. Additionally, generic techniques to intercept SSL data using code injection techniques and monkey patching are presented."

In other words, they were able to make modifications without altering Dropbox's original source code. They also exploited the "Launch Dropbox Website" feature, an item located in the Windows system tray that lets users auto-login to the website. The handling of that in the current version of Dropbox is more secure than in previous ones, but legacy users could still be at risk of having their accounts breached.

This is an impressive feat, even if it is fraught with some scary potential. The team showed that it's possible to blast through Dropbox's two-step login security, hijack accounts and expose code that could allow crafty hackers to devise some ingenious (or malicious) programs. Fortunately, the researchers have no mischief in mind. They only wanted to prove a point: blocking access to underlying code doesn't necessarily stop hacks. All it does is impede well-meaning developers from vetting it properly.
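For readers unfamiliar with the term, monkey patching just means swapping out a function or method at runtime so that calls to it can be observed or changed without touching the original source. The toy sketch below is not the researchers' code and has nothing to do with Dropbox specifically; it only illustrates the general idea in Python.

```python
# Toy illustration of monkey patching (not the researchers' actual technique):
# replace a library function at runtime so every call passes through a wrapper.
import hashlib

original_sha256 = hashlib.sha256          # keep a handle on the real function

def logging_sha256(data=b""):
    """Drop-in replacement that records its input before delegating."""
    print(f"sha256 called with {len(data)} bytes")
    return original_sha256(data)

hashlib.sha256 = logging_sha256           # the "patch": callers now hit the wrapper

# Any later code that calls hashlib.sha256 is transparently intercepted.
print(hashlib.sha256(b"hello world").hexdigest())
```

Applied inside a running client, the same general idea is what makes the kind of data interception the paper describes possible.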
Prepping For Cloudy Days

Of course, that doesn't mean some black-hat hacker won't use these exploits to plunder Dropbox users' data. That's no small matter, considering the company has 175 million users. That's a lot of gigabytes pulsing through the Dropbox cloud.

For my part, I make sure that my most sensitive information isn't among them. I store important logins and other personal data locally (either on my laptop or on an external drive). Some files, of medium importance, get either encrypted or password protected. What remains is detritus or items of lower priority.

I may be atypical, but while I like and use services like Dropbox for convenience, I do so knowing they aren't impregnable. In fact, I operate under the assumption that hacks and breaches are inevitable. That's either paranoid or savvy, depending on your point of view. Either way, it offers some peace of mind whenever the clouds get a little stormy.

Feature image courtesy of Flickr user Derek Key

UPDATE: I reached out to Dropbox for a comment, and received the following via email from a company spokesperson:

"We appreciate the contributions of these researchers and everyone who helps keep Dropbox safe. However, we believe this research does not present a vulnerability in the Dropbox client. In the case outlined here, the user's computer would first need to have been compromised in such a way that it would leave the entire computer, not just the user's Dropbox, open to attacks across the board."

Yet another reason to secure those computers. Spread the word.


How to Use Tech to Lower Your Production and Operating Costs

Businesses succeed by increasing revenue while lowering costs. When you consider how technology plays into that equation, eye-catching innovations may be the only tech that comes to mind. That's partly because we tend to adapt so quickly to the technologies we adopt that ultimately we take them for granted, overlooking the cost-saving benefits of the tech unless it's perceived as groundbreaking. But even tech advances that seem minor can be a powerful approach to reducing your production and operating costs — often dramatically.

3 Ways Tech Can Lower Costs

Some of the seemingly simple tech tools that you use every day have a big impact on your productivity. When was the last time you or one of your employees had to retype an entire letter because of a spelling error? Word processors, laser printers, email, and other communications technologies make that unthinkable today. Similarly, paper spreadsheets have been replaced with Excel, and generally doing things "by hand" seems antiquated and foolishly inefficient.

Yet business leaders seem to forget the important lessons of those transitions to more modern methods, continuing to spend more on their operations than they have to. The technology is ready if only the decision makers would realize the benefits of making changes, large or small. To get a better sense of how technology can reduce your production and operations costs, let's take a look at three areas that are ripe for tech's aid:

1. Slashing building costs

Around half of your energy costs are related to your HVAC and lighting systems. By focusing on these areas, you'll be sure to see cost savings. Utilizing LED light bulbs, which last longer and use less energy, can contribute to your cost savings efforts. So can geothermal technology, which uses the solar energy stored underground to provide heating and cooling. For instance, GeoComfort estimates that Illinois-based trucking company Nussbaum Transportation will see an annual savings of as much as 70 percent on the heating and cooling of its corporate campus because of the new facility's geothermal technology.

Not only can tech help you improve overall energy efficiency, but it can also help you control when electricity is needed. Implementing smart tech controls, such as remote-controllable smart thermostats and lighting controls, can help you fully reap the benefits of the energy-efficient solutions you adopt. These devices ensure you'll only be using and paying for energy that's actually required to keep your business running, day in and day out. What's more, these energy-efficient technologies may save your business even more if they qualify for installation rebates or other incentives from local, state, and federal governments.

2. Saving on human capital

With automation, you can use your team more wisely. According to a Smartsheet survey, 78 percent of information workers are excited to see automation decrease the time they spend on repetitive tasks so they can focus on the more rewarding aspects of their work.
If you automate enough tasks to free up an entire position, consider a new role for that individual involving higher-level responsibilities that could help bring in more revenue for your business. Automation can also make jobs safer for employees. For example, robotics on assembly lines mean workers aren't exposed to dangerous situations or the constant repetitive tasks that can cause muscle and joint issues over time. Fortunately, there's still broad scope for further automation; McKinsey reports that $2.7 trillion of the $5.1 trillion in global manufacturing labor could be automated with technology that was already available in 2015. New devices, processes, controllers, software, and other tech advances come on the market every week and could yield even greater results.

3. Leveraging 3D printing

3D printing is booming, and organizations are finding more ways to adopt it as a cost-reducing method. With 3D printing, you can manufacture your own tools or materials. European carmaker Opel, for example, reported saving 90 percent of its assembly tool costs this way. If needs change, a new variation of a tool can be printed easily. The increased speed and flexibility of creating new objects with 3D printing versus traditional methods may be one of the most powerful arguments for adopting the technology.

3D printing can also speed up product development because it eliminates bottlenecks associated with traditional approaches. With injection molding, for example, you have to create an expensive mold before moving forward, a process so time-consuming that you really need to get it right the first time. 3D printing doesn't require a mold, meaning you can skip that step and create a product faster and more affordably. You can also print in patterns, such as lattices or honeycombs, that aren't easily created with other techniques.

Technology has the potential to accelerate business operations even more than it already has. Advocate for the adoption of these technologies, and you'll make a valuable contribution to the lasting success of your company.
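To make the lighting math above concrete, here is a minimal back-of-the-envelope sketch of the kind of calculation behind those savings claims. Every figure in it (wattages, fixture counts, hours, and the electricity rate) is a hypothetical placeholder rather than a number from this article, so substitute your own building's data before drawing any conclusions.

```python
# Back-of-the-envelope estimate of annual savings from an LED retrofit.
# Every input below is a hypothetical placeholder -- substitute real figures.

def annual_lighting_cost(bulb_watts, bulb_count, hours_per_day, rate_per_kwh, days_per_year=365):
    """Annual electricity cost for a set of identical bulbs."""
    kwh_per_year = bulb_watts / 1000 * bulb_count * hours_per_day * days_per_year
    return kwh_per_year * rate_per_kwh

# Hypothetical office: 200 fixtures running 12 hours/day at $0.12 per kWh.
incandescent = annual_lighting_cost(bulb_watts=60, bulb_count=200, hours_per_day=12, rate_per_kwh=0.12)
led = annual_lighting_cost(bulb_watts=9, bulb_count=200, hours_per_day=12, rate_per_kwh=0.12)

print(f"Incandescent: ${incandescent:,.0f}/yr")
print(f"LED:          ${led:,.0f}/yr")
print(f"Savings:      ${incandescent - led:,.0f}/yr ({1 - led / incandescent:.0%})")
```

A similar sketch works for the HVAC side once you have baseline consumption figures, which is usually the harder part to gather.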


The Caregiver in the Room – Upcoming Webinar!

This MFLN-Military Caregiving concentration blog post was published on April 14, 2017.

Please join us Wednesday, April 26, at 11:00 a.m. Eastern for our free webinar, "The Caregiver in the Room: Considerations for Providers Working with Families."

In this training session, we will examine interpersonal communication skills and strategies for providers collaborating with family caregivers. We will also examine the core challenge of how to respect the autonomy of the client while still communicating with his or her family caregiver. Specific topics include protecting face, offering support, and providing comfort.

Our presenter for this training is Dr. Leanne Knobloch, Professor and Director of Graduate Study in the Department of Communication at the University of Illinois.

Continuing Education Credit Available!

The MFLN Military Caregiving concentration has applied for 1.0 continuing education credit from The University of Texas at Austin School of Social Work for credentialed participants. Certificates of Completion will also be available for training hours.

Interested in Joining the Webinar?

To join this event, simply click on "The Caregiver in the Room: Considerations for Providers Working with Families." The webinar is hosted by the Department of Defense APAN system, but is open to the public.

If you cannot connect to the APAN site, an alternative viewing of this presentation will be running on YouTube Live. Mobile options for YouTube Live are available on all Apple and Android devices.


10 More Time Saving Tips in After Effects

Looking to optimize your After Effects skills? Here are 10 more After Effects tips that will save you time and let you work more efficiently!

Our previous post on 10 Time Saving Tips in After Effects was quite popular, so I've rounded up 10 more tips for editors learning After Effects.

1. Bringing Video Editing Projects into AE

If you're using an Adobe-based post-production workflow, Premiere Pro and After Effects connect easily via Dynamic Link and Import (links to previous blog posts on this feature). Now, let's take a look at how to get your projects from other video editing apps into After Effects:

Media Composer to After Effects: Export the sequence as AAF, then import it in AE using Pro Import (previously Automatic Duck). Kevin P. McAuliffe has a video tutorial on the process here.

Final Cut Pro 7 to After Effects: Export as XML and import via Pro Import in After Effects.

Final Cut Pro X to After Effects: There are two choices (free and paid). Clip Exporter is a free app you can get here. Xto7 for Final Cut Pro is $49.99 and is developed by Assisted Editing, which makes a variety of editing helper apps.

2. Get Organized for Free – Post Haste

Post Haste is a free app that allows you to set up file and folder templates for your projects. It offers a variety of templates (motion graphics, video editing, visual effects) that you can customize. Let the computer do the tedious work like organization, so you can spend your time making cool stuff. (If you'd rather script something similar yourself, see the rough folder-template sketch after this article.)

3. Sync Your Settings (for After Effects CC)

The new Adobe After Effects CC has a feature that allows you to sync your preferences, shortcuts, and more across multiple computers. You simply log in to your account through AE and your preferences will populate. This is huge for freelancers and those who often work in different edit suites at large companies. See the Adobe AE blog for the complete list and what doesn't sync (features that are machine-specific).

4. Automate to Sequence

This is a quick trick to put clips in order and add a dissolve between each one (it also works in Premiere Pro). It's ideal for quickly creating a highlights reel or photo slideshow. You can apply "Automate to Sequence" to clips in the project or to those already in a composition.

From the Project: Select the clips you want, then File > New Comp from Selection (or right-click and select "New Comp from Selection"). Click "Single Composition," "Sequence Layers," and "Overlap" if you want a dissolve.

From a Composition: With the layers stacked on top of each other, select all, right-click, and choose "Keyframe Assistant > Sequence Layers."

5. Move Your Anchor Point Without Moving the Layer

An anchor point determines where a layer scales and rotates from. If you change the anchor point under Transform, you will move the layer. To change the anchor point without moving the layer, use the Pan Behind tool (shortcut is Y). Click on the anchor point and move it to the desired location, then press V to switch back to the Selection tool. To make life easier, move your anchor point with the Pan Behind tool before you animate.

6. After Effects Work Area

The Work Area is the part of the composition that is previewed when you do a RAM Preview (shortcut is zero on the number pad). Move your playhead where you want your work area to start, then press B. Move your playhead where you want your work area to end, then press N. To trim your comp to the length of your Work Area, right-click on it and select "Trim Comp to Work Area."

7. Working with Audio in After Effects

Pressing the spacebar in After Effects won't preview audio like it will in a video editing app. Here are a few After Effects shortcuts for working with audio.

Preview audio: Press . (period) on the number pad to preview just the audio. Press zero on the number pad to preview both video and audio.

Scrub audio: Hold Command (on Mac) or Control (on PC) to move the playhead in a comp while scrubbing the audio.

See audio waveforms in the composition: The shortcut is LL.

8. U Key Toggles Keyframes/Expressions, UU Toggles Modified Properties

This is a quick way to view or hide keyframes for one or more layers. As an example, say I keyframe an opacity change and set the scale to 31%. If I press U, I see the opacity keyframes. If I press UU, I see the opacity keyframes and the scale change. (Screenshots in the original post show the original layer, the layer after pressing U, and after pressing UU.)

9. Collect Files

Collect Files is the equivalent of Media/Project Management in Premiere Pro or FCP. You may use this feature when you want to move your After Effects project and its corresponding media to another hard drive. With a project selected, choose File > Dependencies > Collect Files. If you want to collect files for just a particular composition, select that comp first. A dialog box will appear. Click "Collect" to set a destination, then name the project that will be moved with the media and click Save.

10. Add to Media Encoder Queue (for After Effects CC)

The advantage of creating video files in Adobe Media Encoder is that it has a wide range of presets to choose from (Vimeo, YouTube, phones/tablets). With After Effects CC, you can now add a composition to Media Encoder from File > Export > Adobe Media Encoder Queue, or Composition > Add to Adobe Media Encoder Queue. This will launch Media Encoder and create a folder in the location of your project.

Got an After Effects quick tip to share? Let us know in the comments below!
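As promised in tip 2, here is a rough sketch of what a do-it-yourself folder template could look like if you'd rather script it than use Post Haste. The folder layout, drive path, and project name below are hypothetical examples (they are not Post Haste's actual templates), so rename them to match how you organize footage, audio, and exports.

```python
# Minimal project-folder template, in the spirit of Post Haste.
# The folder layout here is a hypothetical example -- rename to taste.
from pathlib import Path

TEMPLATE = [
    "01_Footage/Camera_A",
    "01_Footage/Camera_B",
    "02_Audio/Location",
    "02_Audio/Music",
    "03_Graphics",
    "04_Project_Files/After_Effects",
    "04_Project_Files/Premiere",
    "05_Exports",
]

def create_project(root: str, project_name: str) -> Path:
    """Create the template folder tree for a new project and return its path."""
    project_root = Path(root) / project_name
    for folder in TEMPLATE:
        (project_root / folder).mkdir(parents=True, exist_ok=True)
    return project_root

if __name__ == "__main__":
    # Hypothetical drive and project name -- change to your own.
    created = create_project("/Volumes/Media", "Client_Spot_2024")
    print(f"Created project at {created}")
```

Running it once per job gives every project the same predictable structure, which is the whole point of tip 2.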


Nine 4K Cameras Under $4K

It's high resolution, not high price, that makes these cameras stand out.

4K is quickly becoming an industry standard in video and film production, but with all the 4K camera options, it can be overwhelming to figure out which camera is right for you. In the following post we take a look at the notable specs of our favorite 4K cameras – all under $4,000.

If you have any other camera suggestions for the PremiumBeat community, we would love to hear them in the comments below!

1. Blackmagic 4K Production Camera

The Blackmagic 4K Production Camera is the only 4K camera on this list that can record 12-bit 4K CinemaDNG RAW. This is incredibly important if you plan on doing serious color correction in post. Instead of recording to an SD card, the Blackmagic 4K records to SSD drives, which can then be attached to a computer. Other notable features include 12 stops of dynamic range, a large 5-inch display, and a free DaVinci Resolve license with each purchase.

Price: $2,995

2. GoPro HERO3+ Black Edition

At 4K, the frame rate of the GoPro HERO3+ is only 15fps, which isn't cinematic but is certainly impressive. The GoPro HERO3+ also has built-in time-lapse capabilities. In addition to shooting 4K footage, the HERO3+ can record 2.7K at 30fps. Pretty incredible quality at this low price.

Price: $399.99

3. Leica D-LUX (Typ 109)

Created by one of the world's most prestigious camera manufacturers, it's no surprise that the Leica D-LUX gives users 4K in an ultra-sleek camera body. The included lens is a 24-75mm f/1.7-2.8 (very fast and versatile). While this isn't a professional video camera, it is certainly a good option if you are looking for a point-and-shoot with 4K recording capabilities.

Price: $1,195

4. Panasonic FZ1000

In line with the Leica D-LUX, the Panasonic FZ1000 is an incredibly dynamic camera that also includes a Leica lens. The camera has built-in 5-axis image stabilization and 4K recording at 30fps. The included lens is a 25-400mm (yes, you read that right) f/2.8-4 Leica lens. While you won't be able to change lenses with this camera, it is ideal if you need to shoot 4K footage quickly.

Price: $899.99

5. Panasonic LX100

The Panasonic LX100 can be viewed as a combination of both the Leica D-LUX and the Panasonic FZ1000. It includes a 24-75mm f/1.7-2.8 Leica lens that is attached to the camera. Users can choose to record 4K footage at 24fps or 30fps and full HD up to 60fps. Manual control rings also make it useful in a production context.

Price: $899

6. Panasonic GH4

After making a splash at NAB 2014, the Panasonic GH4 has quickly become a go-to camera for filmmakers everywhere. Not only does it record in 4K, but it also gives users the ability to output 10-bit 4K footage with the additional YAGH unit. The GH4 has shown that you can get great 4K footage from a small DSLR-like body. It even beat a RED Epic and a Canon 5D Mark III in a recent sharpness test.

Price: $1,699.99

7. Panasonic HC-X1000

This isn't your average camcorder. The Panasonic HC-X1000 records 4K footage at 24fps natively in-camera. It also gives users manual control with rings dedicated to focus, zoom, and aperture. The camera has built-in image stabilization that can correct handshake up to 4,000 times per second. In addition, the camera has all the features you would want in a professional-grade camcorder, such as zebras, color bars, histograms, and built-in ND filters.

Price: $3,499.99

8. Sony FDR-AX100

This Sony FDR-AX100 camcorder records 4K footage at 30fps. Sony has included a Zeiss lens with 12x zoom. Outside of 4K, users can record up to 120fps in HD. The camcorder also has Sony's SteadyShot technology, so smooth images are incredibly easy to capture.

Price: $1,999

9. Sony A7S

The Sony A7S is an incredible little camera that debuted at NAB 2014. While you can't record 4K footage in-camera, you can output a 10-bit 4K signal to be recorded by an external recorder. More impressive than its ability to output 4K footage is its low-light capability. With expanded ISO that extends up to 409,600, the A7S can shoot properly exposed images in near darkness. It has a built-in 25-point autofocus system, and the stills from this camera are high quality. It accepts any lens with a Sony E mount and has a 3-inch tilting display.

Price: $2,499

We can expect to see more 4K cameras below $4,000 in the near future, but for now it seems like Panasonic is pushing this new technology faster than everyone else. We will be interested to see if other camera manufacturers like Canon or Nikon will begin to produce more affordable 4K cameras.

Have you used any of the cameras on this list? Share your experience in the comments below.
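As a small aside, once a round-up like this grows it can be handy to filter it programmatically. The sketch below reuses the prices quoted above (accurate when this was written, certainly outdated now) and flags the Sony A7S as needing an external recorder for 4K, per its description; everything else in the snippet is illustration, not additional camera data.

```python
# Filter the cameras above by budget; prices are the article's figures and will be outdated.
CAMERAS = [
    ("Blackmagic 4K Production Camera", 2995.00, True),
    ("GoPro HERO3+ Black Edition",       399.99, True),
    ("Leica D-LUX (Typ 109)",           1195.00, True),
    ("Panasonic FZ1000",                 899.99, True),
    ("Panasonic LX100",                  899.00, True),
    ("Panasonic GH4",                   1699.99, True),
    ("Panasonic HC-X1000",              3499.99, True),
    ("Sony FDR-AX100",                  1999.00, True),
    ("Sony A7S",                        2499.00, False),  # 4K only via an external recorder
]

def under_budget(cameras, budget, in_camera_4k_only=False):
    """Return (name, price) pairs at or under the budget, cheapest first."""
    hits = [
        (name, price)
        for name, price, internal_4k in cameras
        if price <= budget and (internal_4k or not in_camera_4k_only)
    ]
    return sorted(hits, key=lambda pair: pair[1])

for name, price in under_budget(CAMERAS, budget=2000, in_camera_4k_only=True):
    print(f"${price:>8.2f}  {name}")
```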


4 Premiere Pro Tips that Save Time

Looking to save time in Premiere Pro CC? These quick tips can speed up your workflow.

Cover image via Shutterstock.

Recently, we covered five practical tips that can increase your efficiency in DaVinci Resolve. Let's have a look at a few more tips, but this time in Premiere Pro CC. These tips aren't life hacks, nor are they hidden treasures known only to Premiere's developers. However, these tips often get overlooked in guides, and they can shave time off your project and reduce the number of steps from point A to point B. When you're working with a deadline, any time you can save is crucial.

Import Size

A lot of consumer-grade camera equipment can now record up to 4K, but consumer demand for 4K footage isn't that high yet. So, if you are editing 4K footage in a 1080p timeline, you may find yourself checking "Default scale to frame size" in the general preferences tab to keep your imported footage at the same size as the timeline. However, this removes very important resolution data, and if you select this option, you might as well have just shot at 1080p. Instead, you want to hit "Set to Frame Size," which will decrease the scale of the clip to 50%. This will allow you to increase and crop (if needed) the 4K footage. (For the arithmetic behind that 50% figure, see the short sketch after this article.)

However, selecting this on every single shot is two mouse clicks too many. So, what we can do is open the Keyboard Shortcuts panel, type in "set to frame size," then pair it with an available key. I've chosen 1 on the number pad. Now you can scale a clip to the correct timeline settings with the push of just one button.

Bin Tab

I place every project of mine into an organized system of bins. Each scene has its own bin, and within that bin, the footage and audio for that scene also get their own folders. This keeps my project panel neat, and I know where everything is at any given time. However, by default, when you open a bin, it pops out into a new window. Before I knew that there was a setting to change this, I spent a lot of time moving the bin window away from the source monitor.

Bin windows appearing automatically over the source monitor isn't the best default placement, since you're going to want to preview the contents of the bin on the source monitor. Luckily, you can change this so the bin opens within the Project tab. Simply go to Preferences, select General, scroll down to Bins, and change the double-click option from "Open in new window" to "Open in place." Now, whenever you open a bin, it will remain within the Project panel. This removes the extra step of moving a bin around your interface while you preview files.

Unlink Audio

Although dragging footage into the timeline without the audio is a basic task, the web is full of questions about how to do this. If you're working with drone footage or documentary footage — or anything that makes the sound redundant — it can become a pain to unlink the video footage and then delete the audio. There are several ways to simplify this.

When your footage is in the source monitor, once you have marked your in and out points, drag the footage in using the Drag Video Only icon instead of the clip itself.

If you are dragging footage from the project window, first uncheck this icon, which will unlink video and audio, simplifying deletion. No need to right-click and press "unlink." If you already have linked media on the timeline and going through it all would be too tedious, you can select all the clips you need to unlink, hit CTRL + L, then delete.

Still Size

When you're working with both stills and video footage, unless you're performing an insert/replace within an already edited sequence, it may be somewhat redundant to bring each still into the source monitor. After all, it's not as if you need to scrub through it. However, dragging stills from the Project panel can become problematic if you need to trim every one. Fortunately, you can automatically set the default still duration. Go back to the general preferences, and you will find the still image default duration setting.

By changing the seconds duration, you can control how long each still appears in the timeline by default. This is very handy if you have a series of stills that will play one after another. You can just drag them straight from the Project panel into the timeline.

Do you know any Premiere Pro timesavers? Let us know in the comments.
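As mentioned in the Import Size tip, the 50% figure is just the ratio between the timeline frame and the source frame. Here is a quick sketch of that arithmetic, assuming a simple fit-within behavior and a 1920×1080 timeline; the listed resolutions are common examples, not an exhaustive table.

```python
# Rough "fit to frame" scale percentage: the largest uniform scale that keeps
# the source inside the timeline frame. Assumes square pixels for simplicity.

def fit_scale_percent(src_w, src_h, timeline_w=1920, timeline_h=1080):
    return min(timeline_w / src_w, timeline_h / src_h) * 100

examples = [
    ("UHD 4K (3840x2160)", 3840, 2160),
    ("DCI 4K (4096x2160)", 4096, 2160),
    ("2.7K (2704x1520)", 2704, 1520),
    ("HD (1920x1080)", 1920, 1080),
]

for label, w, h in examples:
    print(f"{label:>20}: {fit_scale_percent(w, h):.1f}% in a 1080p timeline")
```

UHD sources land at exactly 50%, which is where the figure in the tip comes from; wider or narrower sources end up at slightly different percentages.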


On Unnecessarily Poor Language Choices

Tonight a telemarketer called my Mom's house. The telemarketer began the call with this: "Can I please speak to the male head of the house?"

My Mom hasn't been married or lived with a male head of household (whatever that is) since 1974. She is a very successful entrepreneur who raised four kids without an ounce of help from anyone (which explains why she is my personal hero).

Fortunately, my Mom is super sweet and has a great sense of humor. But other people may not have been so forgiving. That sort of opening language did nothing to help the company that was calling. It did nothing to help establish a positive association with their brand. It did even less to help them gain a customer. It was an unnecessarily poor choice of words. It hasn't been 1952 for a long, long time.

This is an example of why language choices matter. And it's a good opportunity to reflect on the language choices that you make.

Questions

What assumptions are embedded in your language?

What beliefs or biases do your language choices reveal?

Are some of the words you choose unnecessarily provocative or insulting to some people?

Do you ask for the decision-maker? What does that suggest about how you are going to treat the real decision-makers whose consensus you are trying to gain? (as one example)
