Author: Carlos Tapia, Nordregio
Web scraping
Web scraping is a method for automatically gathering data from web pages. It uses software to extract content such as text, images, and tables. In rural contexts, web scraping can be used to collect relevant information about farming, the weather, local events, or community services.
This information can then be analysed to understand rural trends, monitor environmental change, or assess the resources available in different areas. Web scraping gives researchers and decision-makers fast, efficient access to a wide range of data, supporting rural development programmes with current and comprehensive evidence.
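As a minimal sketch of what extraction looks like in practice, the snippet below pulls address/price pairs out of an HTML table of property listings. The HTML string is an inline stand-in for a fetched page, and the field values are invented; real scrapers typically download the page with an HTTP library and use a proper HTML parser such as BeautifulSoup rather than regular expressions.

```python
import re

# Stand-in for HTML that would normally be downloaded from a listings page.
# Addresses and prices here are hypothetical examples.
html = """
<table id="listings">
  <tr><th>Address</th><th>Price (SEK)</th></tr>
  <tr><td>Storgatan 1, Luleå</td><td>1 250 000</td></tr>
  <tr><td>Kyrkvägen 4, Umeå</td><td>2 100 000</td></tr>
</table>
"""

# Extract each data row's two cells; the header row has <th>, not <td>,
# so it is skipped automatically by the pattern.
rows = []
for address, price in re.findall(r"<td>(.*?)</td><td>(.*?)</td>", html):
    rows.append({"address": address,
                 "price_sek": int(price.replace(" ", ""))})

print(rows)
```

The structured records produced this way can then be stored and analysed like any other tabular dataset.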
Web scraping in GRANULAR
In GRANULAR, web scraping will be used to collect and analyse real-time data on property transactions. These data are valuable for studying the performance of local housing markets, particularly in areas where tensions may arise due to external shocks.
By examining these property transactions, we can gain insights into the dynamics of the housing market and make informed decisions to address emerging challenges in affected regions.
Case study: Rural housing in Sweden
In Sweden, there is no official database of real estate prices and title registrations that includes geocoded information. This gap limits our knowledge of housing markets and their variation, particularly outside the large urban centres.
Our GRANULAR partner Nordregio has performed a web scraping exercise to analyse the housing markets of Sweden's two northernmost counties, Norrbotten and Västerbotten, both remote rural regions.
This page walks you through the entire process, from data scraping to data cleaning, interpolation and analysis, starting from freely available online data on rural housing markets. The method can be transferred to other regions with similar data sources.
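The cleaning-and-analysis step described above can be sketched as follows. The records, listing IDs and field names here are hypothetical, standing in for listings scraped from an online housing portal; the sketch removes duplicate and incomplete records, then computes a median asking price per county.

```python
from statistics import median

# Hypothetical scraped listings (IDs, counties and prices are invented
# for illustration, not real data from the GRANULAR exercise).
scraped = [
    {"id": "a1", "county": "Norrbotten",   "price_sek": 1_250_000},
    {"id": "a1", "county": "Norrbotten",   "price_sek": 1_250_000},  # duplicate
    {"id": "b2", "county": "Västerbotten", "price_sek": 2_100_000},
    {"id": "c3", "county": "Västerbotten", "price_sek": None},       # missing price
    {"id": "d4", "county": "Norrbotten",   "price_sek": 980_000},
]

# Cleaning: keep each listing ID once and drop records without a price.
seen, clean = set(), []
for rec in scraped:
    if rec["id"] not in seen and rec["price_sek"] is not None:
        seen.add(rec["id"])
        clean.append(rec)

# Analysis: median asking price per county.
by_county = {}
for rec in clean:
    by_county.setdefault(rec["county"], []).append(rec["price_sek"])
medians = {county: median(prices) for county, prices in by_county.items()}

print(medians)
```

In a real workflow each stage would be larger (geocoding, interpolation, temporal aggregation), but the shape of the pipeline, scrape, clean, then summarise, is the same.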