Scrapes labels from the Etherscan, BscScan, PolygonScan, Optimistic Etherscan, Arbiscan, FTMScan and Snowtrace (Avalanche) websites and stores them as JSON/CSV.
Chain | Site | Label Count | Status | Last scraped |
---|---|---|---|---|
ETH | https://etherscan.io | 29945 | ✅ ok | 19/6/2024 |
BSC | https://bscscan.com | 6726 | ✅ ok | 27/3/2024 |
POLY | https://polygonscan.com | 4997 | ✅ ok | 27/3/2024 |
OPT | https://optimistic.etherscan.io | 546 | ✅ ok | 29/5/2024 |
ARB | https://arbiscan.io | 837 | ✅ ok | 26/7/2024 |
FTM | https://ftmscan.com | 1085 | ✅ ok | 29/9/2024 |
AVAX | https://snowtrace.io | 1062 | ✅ ok | 15/10/2024 |
Total Chains: 7
Total Labels: 45198
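
The per-label JSON/CSV dumps described above can be pictured with a minimal sketch. The field names (`address`, `nameTag`, `label`) and the `data/eth/...` output path are illustrative assumptions, not the project's exact schema.

```python
# Minimal sketch of the storage step: each scraped label yields rows that are
# dumped to both JSON and CSV. Field names and paths here are assumptions.
import csv
import json
from pathlib import Path


def dump_labels(rows: list[dict], out_prefix: str) -> None:
    """Write one label's rows to <out_prefix>.json and <out_prefix>.csv."""
    Path(out_prefix).parent.mkdir(parents=True, exist_ok=True)
    with open(f"{out_prefix}.json", "w", encoding="utf-8") as f:
        json.dump(rows, f, indent=2)
    with open(f"{out_prefix}.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["address", "nameTag", "label"])
        writer.writeheader()
        writer.writerows(rows)


# Example usage with made-up data:
dump_labels(
    [{"address": "0x0000000000000000000000000000000000000000",
      "nameTag": "Example Exchange", "label": "exchange"}],
    "data/eth/exchange",
)
```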
- On the command line, run `pip install -r requirements.txt` while located in the folder with the code.
- (Optional) Add `ETHERSCAN_USER` and `ETHERSCAN_PASS` to `sample.config.json` and rename it to `config.json` (a sketch of reading these values follows this list).
- Run the script with `python main.py`.
- Proceed to enter either `eth`, `bsc` or `poly` to specify the chain of interest.
- Log in to your ___scan account (prevents popups/missing data).
- Press Enter in the CLI once logged in.
- Proceed to enter either `single` (retrieve a specific label) or `all` (retrieve ALL labels); a sketch of scripting these prompts also follows this list.
- If `single`: follow up with the specific label, e.g. `exchange`, `bridge`, ...
- If `all`: simply let it run (it takes roughly an hour or more to retrieve everything, and it occasionally crashes).
- Individual JSON and CSV data is dumped into the `data` subfolder.
- Consolidated JSON label info is dumped into the `combined` subfolder (a loading example follows this list).
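
A minimal sketch of reading the optional credentials back out of `config.json`. The two key names come from the steps above; the flat JSON layout is an assumption.

```python
# Read the optional ___scan credentials from config.json (layout assumed).
import json

with open("config.json", encoding="utf-8") as f:
    config = json.load(f)

user = config.get("ETHERSCAN_USER")
password = config.get("ETHERSCAN_PASS")
if not user or not password:
    print("No credentials set; log in manually when prompted.")
```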
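
If you prefer to script a run instead of typing the answers by hand, something along these lines may work. The answer order (chain, Enter after login, mode, label) mirrors the prompts above, but because the login step expects a real browser session, a fully unattended run may stall or return incomplete data.

```python
# Hedged sketch of driving the interactive prompts non-interactively.
import subprocess

# chain, blank line for "press Enter once logged in", mode, label
answers = "\n".join(["eth", "", "single", "exchange"]) + "\n"
subprocess.run(["python", "main.py"], input=answers, text=True)
```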
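
Once a run finishes, the dumps can be inspected with plain `json`. The file names below are illustrative guesses; check the `data` and `combined` subfolders for the actual names produced on your machine.

```python
# Quick inspection of the dumped output (file names are assumptions).
import json

with open("data/eth/exchange.json", encoding="utf-8") as f:
    exchange_rows = json.load(f)
print(f"{len(exchange_rows)} rows for the 'exchange' label")

with open("combined/combinedAllLabels.json", encoding="utf-8") as f:
    combined = json.load(f)
print(f"{len(combined)} entries in the combined label file")
```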