Eran Halevy – Guest Writer, Entrepreneur Magazine
Companies don’t just vacuum up data on competitors’ prices; some gain advantage by distorting the picture competitors see.
In January 2017, news broke that Amazon had managed to block bots run by Walmart that were scraping Amazon's listings "several million times a day." In the same Reuters report, the chief executive of Boxed, a New York-based online wholesaler, described scraping competitor prices every 20 minutes and adjusting accordingly: "If we're not decently priced, we'll see it almost immediately [in sales declines]."
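The repricing loop Boxed describes can be sketched in a few lines. Everything here is illustrative rather than Boxed's actual system: the HTML snippet, the price-element pattern and the undercut-by-a-cent rule are assumptions. A real monitor would fetch the competitor page every 20 minutes with an HTTP client and a proper HTML parser.

```python
import re

# Illustrative competitor page fragment; in production this would be
# fetched on a schedule rather than hard-coded.
COMPETITOR_HTML = '<span class="price">$24.99</span>'

def extract_price(html: str) -> float:
    """Pull the first dollar amount out of a price element."""
    match = re.search(r'class="price">\$([0-9]+\.[0-9]{2})', html)
    if not match:
        raise ValueError("no price found")
    return float(match.group(1))

def reprice(our_price: float, competitor_price: float, margin_floor: float) -> float:
    """Undercut the competitor by one cent, but never drop below our margin floor."""
    target = round(competitor_price - 0.01, 2)
    return max(target, margin_floor)

competitor = extract_price(COMPETITOR_HTML)
new_price = reprice(our_price=26.50, competitor_price=competitor, margin_floor=19.99)
print(new_price)  # 24.98
```

The margin floor matters: an automated undercutting rule with no floor lets a competitor drag both sellers into a race to zero.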
Knowledge is power.
What started as a one-way tool for extracting web data, increasing competition for the benefit of consumers, has turned into an arms race in which target websites try to sabotage the data collection to protect their competitive advantage. Third-party services have emerged to help those websites identify and block competitors scraping their data.
A more cunning defense is serving falsified information, such as showing bots a higher-than-actual price, which foils the scraper's purpose rather than its mechanism.
To avoid falsified information (also called spoofing or cloaking) or outright blocking, companies have employed proxy networks: data-center-based servers through which they route, or proxy, their requests to hide their identities. However, savvy target websites can identify data-center IP ranges as well. The solution came in the shape of peer-to-peer (P2P) networks, also known as residential IP networks.
P2P networks consist of consumers who willingly route some commercial requests through their own IP addresses in return for benefits (e.g., free use of applications, ad-free browsing, or use of the P2P network themselves). Companies collecting intelligence through such networks can thus see the web as consumers see it, without being spoofed or blocked.
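Mechanically, such a network behaves like a rotating pool of exit IPs, so no single address shows a bot-like request rate. A minimal sketch of the rotation logic follows; the addresses are placeholders from documentation-reserved ranges, not real residential peers, and the pool interface is invented for illustration.

```python
import random

class ProxyPool:
    """Rotate scraping requests across a pool of exit IPs.

    The addresses passed in are placeholders (TEST-NET ranges),
    not real residential peers.
    """

    def __init__(self, exit_ips, seed=None):
        self.exit_ips = list(exit_ips)
        self._rng = random.Random(seed)

    def pick(self):
        """Choose a random exit IP for the next request."""
        return self._rng.choice(self.exit_ips)

pool = ProxyPool(["203.0.113.10", "198.51.100.7", "192.0.2.44"], seed=1)
for _ in range(3):
    exit_ip = pool.pick()
    # Each request would be proxied through a different peer, e.g. with the
    # requests library: requests.get(url, proxies={"https": f"http://{exit_ip}:8080"})
    print(exit_ip)
```

Real residential networks add session stickiness (keeping one IP for a multi-request session) and per-IP rate limits on top of simple rotation, but the core idea is the same: spread the traffic so it resembles many individual consumers.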
The potential of scraping goes far beyond price wars. The internet is awash with unstructured data just waiting to be tapped.
How companies use data scraping
Some companies scrape to generate their own sales leads rather than buying contact lists, and get higher-quality prospects in the process. Some scrape job boards to find companies that are hiring and growing; others monitor social media for firms that have just won funding.
For example, Proven is a skincare company that scrapes customer reviews to create highly personalized products. It has built a continually updated database of 8 million reviews, 100,000 beauty products and 4,000 scientific articles about skincare and product ingredients. Its machine-learning algorithm discovers the links among these to develop cleansers, creams and toners customized to age, skin type, ethnicity and conditions like acne. Customers fill out a questionnaire that maps them to an AI-assisted skin profile and receive a recommended skincare regimen.
The arms race is also rampant in the online advertising industry. For example, large ad publishers need to make sure that hackers don't use their programmatic advertising platforms to spread viruses and malware to end users. So they constantly scan the incoming ad servers to make sure the content is safe and legitimate.
The problem is that when hackers recognize that a publisher is calling their servers, they send a real ad so that everything appears fine. If the publisher can appear as a regular online user, it will be served the fraudulent ad instead, which it can then block from being published. The ability to scan ad servers as regular consumers is how publishers keep their audiences safe from fraudulent and potentially dangerous ads.
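The detection idea reduces to a comparison: fetch the same ad creative once from an address the hacker will recognize as a scanner and once from a consumer-like residential exit, then check whether the two responses differ. The sketch below assumes the two fetches have already happened; the hashing approach and function names are illustrative, not any publisher's actual tooling.

```python
import hashlib

def fingerprint(creative_bytes: bytes) -> str:
    """Hash an ad creative so two fetches can be compared cheaply."""
    return hashlib.sha256(creative_bytes).hexdigest()

def looks_cloaked(bot_view: bytes, consumer_view: bytes) -> bool:
    """Flag an ad server that shows scanners one creative and real
    users another -- the classic cloaking signature."""
    return fingerprint(bot_view) != fingerprint(consumer_view)

# In practice bot_view would be fetched from a data-center IP with a
# scanner User-Agent, and consumer_view from a residential exit node.
same = looks_cloaked(b"<img src=ad.png>", b"<img src=ad.png>")
different = looks_cloaked(b"<img src=ad.png>", b"<script>payload()</script>")
print(same, different)  # False True
```

Exact byte equality is a deliberately strict test; real scanners tolerate benign variation (cache-busting parameters, rotated creatives) and instead compare the embedded landing URLs and scripts.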
Get creative, and you can disrupt any industry with scraping.
Original article on Entrepreneur.com
Freelance data security consultant and user acquisition expert