Visiting many separate websites to find out whether their content has been updated can take a long time. Aggregation technology consolidates many websites into a single page that shows the new or updated information from all of them. Aggregators reduce the time and effort needed to check websites for updates regularly, creating a unique information space or personal newspaper. Once a user subscribes to a feed, the aggregator checks it for new content at user-determined intervals and retrieves the updates. The content is sometimes described as being pulled to the subscriber, as opposed to pushed by email or instant messaging; a minimal sketch of such a polling loop appears below. Unlike recipients of some pushed information, the aggregator user can easily unsubscribe from a feed.

Aggregation features are frequently built into web portal sites, web browsers, email applications, or application software designed specifically for reading feeds. The aggregator provides a consolidated view of the content in one browser display or desktop application. Aggregators with podcasting capabilities can automatically download media files, such as MP3 recordings. In some cases, these can be loaded automatically onto portable media players (such as iPods) when they are connected to the end user's computer. By 2011, so-called RSS narrators had appeared, which aggregated text-only news feeds and converted them into audio recordings for offline listening.

The syndicated content an aggregator retrieves and interprets is usually supplied in the form of RSS or other XML-formatted data, such as RDF/XML or Atom.

Examples of this sort of website are Google News, AdeptPage, newsinfeed.com, EExpoNews, Drudge Report, Huffington Post, Zero Hedge, Reddit,[1] Newslookup, Wire-News, Newsvine, World News (WN) Network, and the Daily Beast, where aggregation is entirely automatic, using algorithms that carry out contextual analysis and group similar stories together, while other sites supplement automatically aggregated stories with manually curated headlines and their own articles.[2] News aggregation websites began with content selected and entered by humans, while automated selection algorithms were eventually developed to fill the content from a range of either automatically selected or manually added sources. Google News launched in 2002 using automated story selection, but humans could add sources to its search engine, while the older Yahoo News, as of 2005, used a combination of automated news crawlers and human editors.[3][4][5]

Web-based feed readers allow users to find a web feed on the internet and add it to their feed reader. Online feed readers include Bloglines, Feedly, Feedspot, FeedRover, Flipboard, Digg, News360, My Yahoo!, NewsBlur,[6][7] and Netvibes. These are meant for personal use and are hosted on remote servers; because the application is available via the web, it can be accessed anywhere by a user with an internet connection.

More advanced methods of aggregating feeds are provided via Ajax coding techniques and XML components called web widgets. Ranging from full-fledged applications to small fragments of source code that can be integrated into larger programs, they allow users to aggregate OPML files, email services, documents, or feeds into one interface. Many customizable home page and portal implementations provide such functionality. In addition to aggregator services intended mainly for individual use, there are web applications that can be used to aggregate several blogs into one.
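As an illustration of the pull model described above, the following Python sketch polls a single RSS 2.0 feed at a fixed interval and reports items it has not seen before. The feed URL and polling interval are hypothetical placeholders; real feed readers additionally honour HTTP caching headers, handle Atom as well as RSS, and persist the list of seen items between runs.

```python
import time
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/feed.xml"   # placeholder feed address
POLL_INTERVAL = 15 * 60                     # user-determined interval, in seconds

def fetch_items(url):
    """Download the feed and yield (guid, title, link) for each <item>."""
    with urllib.request.urlopen(url) as response:
        root = ET.parse(response).getroot()
    for item in root.iter("item"):          # RSS 2.0 layout; Atom uses <entry> instead
        guid = item.findtext("guid") or item.findtext("link")
        yield guid, item.findtext("title"), item.findtext("link")

def poll_forever():
    seen = set()
    while True:
        for guid, title, link in fetch_items(FEED_URL):
            if guid not in seen:            # only surface content that is new
                seen.add(guid)
                print(f"New item: {title} -> {link}")
        time.sleep(POLL_INTERVAL)           # the "pull" happens on the reader's schedule
```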
One such variety, known as planet sites, is used by online communities to aggregate community blogs in a centralized location. They are named after the Planet aggregator, a server application designed for this purpose.
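A planet site performs a similar job for a fixed list of community blogs, merging their entries into one chronological page. The sketch below is a minimal illustration of that idea using only the Python standard library and hypothetical feed addresses, not the actual Planet software, which adds templating, caching, and Atom support.

```python
from email.utils import parsedate_to_datetime
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical community blogs to merge into one "planet" page.
COMMUNITY_FEEDS = [
    "https://alice.example.org/blog/rss.xml",
    "https://bob.example.org/posts/rss.xml",
]

def read_entries(url):
    """Yield (published, title, link) for every <item> in an RSS 2.0 feed."""
    with urllib.request.urlopen(url) as response:
        root = ET.parse(response).getroot()
    for item in root.iter("item"):
        # Assumes each <item> carries an RFC 822 pubDate, as RSS 2.0 recommends.
        published = parsedate_to_datetime(item.findtext("pubDate"))
        yield published, item.findtext("title"), item.findtext("link")

def build_planet(feeds):
    """Merge all entries and list them newest-first, as a planet page would."""
    entries = [entry for url in feeds for entry in read_entries(url)]
    for published, title, link in sorted(entries, reverse=True):
        print(f"{published:%Y-%m-%d}  {title}  ({link})")

if __name__ == "__main__":
    build_planet(COMMUNITY_FEEDS)
```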