Pivot-Points Scraper
Follow a relational database concept: avoid inserting repeat data. Check whether the data already exists, and if it does, only update the "end_date" of the last inserted row, keeping its start date the same. The check must compare all columns for an exact match. Also, avoid inserting lengthy text data types; instead, insert the "id" of the row from another table that holds the text value, which helps during queries.
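A minimal sketch of this insert-or-extend logic, assuming a DB-API style cursor with MySQL-type "%s" placeholders and an assumed "pivot_points" table with pp/r1-s3 value columns plus start_date/end_date; names should be adjusted to the real schema:

```python
# Sketch of the "update end_date instead of re-inserting" rule.
# Table and column names (pivot_points, pp, r1..s3, start_date, end_date)
# are assumptions -- adjust to the real schema.
def upsert_pivot_row(cur, website_id, currency_id, timeframe_id, values, now):
    cur.execute(
        """SELECT id FROM pivot_points
           WHERE website_id=%s AND currency_id=%s AND timeframe_id=%s
             AND pp=%s AND r1=%s AND r2=%s AND r3=%s
             AND s1=%s AND s2=%s AND s3=%s
           ORDER BY start_date DESC LIMIT 1""",
        (website_id, currency_id, timeframe_id,
         values["pp"], values["r1"], values["r2"], values["r3"],
         values["s1"], values["s2"], values["s3"]))
    row = cur.fetchone()
    if row:
        # Identical data already stored: only extend end_date, keep start_date.
        cur.execute("UPDATE pivot_points SET end_date=%s WHERE id=%s",
                    (now, row[0]))
    else:
        # New values: insert a fresh row with start_date = end_date = now.
        cur.execute(
            """INSERT INTO pivot_points
               (website_id, currency_id, timeframe_id,
                pp, r1, r2, r3, s1, s2, s3, start_date, end_date)
               VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)""",
            (website_id, currency_id, timeframe_id,
             values["pp"], values["r1"], values["r2"], values["r3"],
             values["s1"], values["s2"], values["s3"], now, now))
```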
You will only scrape currency pairs (symbols containing "/", like AUD/CAD, CAD/JPY); skip BITCOIN, OIL, GERMANY 40, etc.
There are Hourly, Daily, Weekly, and Monthly tabs. You will scrape data for each tab and save it in the database with a column value such as "Hourly" or "Daily". But you won't insert the text value; instead you will use the ids below:
id Text value
4 Hourly
6 Daily
7 Weekly
8 Monthly
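For illustration, a minimal sketch of this id mapping (including the H4 id "5" mentioned in the drop-down notes further down); the 1/5/15-minute ids are not guessed here and must come from the time-period reference table:

```python
# Timeframe label -> timeframe id, exactly as listed in the spec.
# The 1/5/15-minute ids must be taken from the reference table, not guessed.
TIMEFRAME_IDS = {
    "Hourly": 4,
    "H4": 5,        # the "5 Hours" drop-down entry, saved as H4
    "Daily": 6,
    "Weekly": 7,
    "Monthly": 8,
}
```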
Don't ask me why; I have another table that holds these values and is used by a different application.
The currency table will be as below. Please make sure you exactly match the id of each currency you will be using (replace "/" with an underscore "_" in the currency name).
You MUST use the currency id from that table that matches the currency pair being scraped; otherwise it will NOT match my overall application's database queries.
The first column, id, will be auto-generated. Website_id will be "2" for the dailyfx website.
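A small helper along these lines could enforce the pair-filtering and "/" to "_" rules; the actual id values are deliberately left to be filled in from the provided currency table:

```python
# Currency-pair filtering and name normalisation, per the rules above.
# The id lookup must come from the provided currency table; the dict below
# is only a placeholder showing the expected shape.
CURRENCY_IDS = {}   # e.g. {"AUD_CAD": <id from the currency table>, ...}

def currency_id_for(symbol):
    """Return the currency id for a scraped symbol, or None to skip it."""
    if "/" not in symbol:                      # skips BITCOIN, OIL, GERMANY 40, ...
        return None
    name = symbol.strip().replace("/", "_")    # AUD/CAD -> AUD_CAD
    return CURRENCY_IDS.get(name)              # None if this site lists an unknown pair
```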
https://www.actionforex.com/markets/pivot-points/
Pay attention to the drop-down for the timeframe. We want to scrape only the following timeframes: 1 minute, 5 minute, 15 minute, hourly, 5 Hours (save as H4 with an id of "5"), daily, weekly, monthly. Skip 30 minute.
Use the table below as a reference for the IDs.
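A minimal fetch-and-parse sketch for this page, using the lightweight requests and BeautifulSoup libraries; the table selector and the way the drop-down switches timeframes (query parameter vs. AJAX) are assumptions that must be verified against the live page:

```python
import requests
from bs4 import BeautifulSoup

URL = "https://www.actionforex.com/markets/pivot-points/"

def fetch_pivot_rows():
    """Fetch the pivot-points page and return raw table rows as lists of cell text."""
    html = requests.get(URL, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    for tr in soup.select("table tr"):     # placeholder selector -- confirm against real markup
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if cells:
            rows.append(cells)
    return rows
```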
Remember, not all websites will have all currency pairs; you need to carefully save to the database with the correct id for that currency, along with the correct timeframe, for that website id.
I will provide the currency and time-period tables as a reference. Your code should be modular, properly grouped into classes, with functions for repeatable tasks, and with HTML, CSS, and JS kept separate where necessary.
Prefer lightweight PHP or Python libraries, to avoid too many dependencies.
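One possible modular layout, reusing the TIMEFRAME_IDS and currency_id_for helpers sketched above (class and function names are suggestions only, not fixed requirements):

```python
class PivotPointScraper:
    """Fetches and parses the pivot-point page for one website."""
    def __init__(self, website_id):
        self.website_id = website_id

    def scrape(self, timeframe_label):
        """Return a list of (symbol, values) pairs for one timeframe."""
        return []  # TODO: fetch the page and parse the rows for this timeframe

class PivotPointRepository:
    """Wraps the duplicate-check / end_date-update database logic."""
    def save(self, website_id, currency_id, timeframe_id, values):
        pass  # TODO: call the insert-or-extend routine sketched earlier

def run():
    scraper = PivotPointScraper(website_id=2)   # website_id 2 per the spec
    repo = PivotPointRepository()
    for label, tf_id in TIMEFRAME_IDS.items():
        for symbol, values in scraper.scrape(label):
            cid = currency_id_for(symbol)
            if cid is not None:                 # skip non-pairs and unknown pairs
                repo.save(scraper.website_id, cid, tf_id, values)
```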
This script will run automatically as a cron job on a Linux Ubuntu server every 5 minutes to scrape fresh data.
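For reference, a crontab entry along these lines would run it every 5 minutes (interpreter path, script path, and log file are placeholders):

```
*/5 * * * * /usr/bin/python3 /path/to/pivot_scraper.py >> /var/log/pivot_scraper.log 2>&1
```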