
Is there a free banknotes database? By "banknotes database" I mean a database of the banknote denominations of all world currencies.

For example, for the US Dollar the values are:

$100, $50, $20, $10, $5, $1, 50c, ...

TIKSN
  • There used to be a 50c note in the States, but not today. So are you talking about historical notes as well? Or coins? Or is that just a typo? – sheß Sep 06 '15 at 10:53
  • No, nothing historical. Just current data. – TIKSN Sep 06 '15 at 11:36

2 Answers


The data you're looking for seems to be available here: http://www.whichwaytopay.com/world-currencies-by-country.asp, though not in a machine-readable format. It is, however, a list in which each entry links to the information you're after, so if you are familiar with any scripting/programming language, you should be able to crawl the site and extract that data.
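
To give a feel for how little code that takes, here is a minimal sketch in Python 3 with requests and beautifulsoup4; picking out the per-currency pages by the substring 'currency' in their URLs is an assumption (the answer below uses the same heuristic):

# Minimal sketch (Python 3, requests + beautifulsoup4): list the per-currency
# links on the index page. Assumes those links contain 'currency' in the URL.
import requests
from bs4 import BeautifulSoup

url = 'http://www.whichwaytopay.com/world-currencies-by-country.asp'
soup = BeautifulSoup(requests.get(url).text, 'html.parser')

currency_links = sorted({a['href'] for a in soup.find_all('a', href=True)
                         if 'currency' in a['href']})
for link in currency_links:
    print(link)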

sheß

I started a basic scrape of the website in this answer.

Python 2.7 code:

# -*- coding: utf-8 -*-
import requests                          # fetching web pages
from BeautifulSoup import BeautifulSoup  # parsing HTML (BeautifulSoup 3)

def get_href(soup):
    # return the href attribute of every anchor tag in the parsed page
    return [anchor['href'] for anchor in soup.findAll('a', href=True)]

def main():
    url = 'http://www.whichwaytopay.com/world-currencies-by-country.asp'

    s = requests.Session()
    soup = BeautifulSoup(s.get(url).text)  # download and parse the main page

    # keep only the unique links that point at individual currency pages
    suburl_list = list(set(x for x in get_href(soup) if 'currency' in x))

    with open('output.tsv', 'wb') as output:
        for suburl in suburl_list:  # for each currency page found on the main page
            subsoup = BeautifulSoup(s.get(suburl).text)  # download and parse it
            print suburl.encode('utf-8')  # show progress

            try:
                # cut the page text down to the block between 'CURRENCY:' and
                # the boilerplate sections that follow it
                text = (subsoup.getText()
                        .split('CURRENCY:')[1]
                        .split('US DOLLAR ACCEPTED')[0]
                        .split('CREDIT')[0]
                        .split('CURRENCY')[0]
                        .split('TRAVELLER')[0])
            except IndexError:
                text = subsoup.getText()  # marker not found; keep the raw text

            text = text.replace('DENOMINATIONS:', '\t')
            # write one tab-separated line per currency page
            output.write((suburl + '\t' + text + '\n').encode('utf-8'))

if __name__ == '__main__':
    main()  # run the whole thing

It doesn't work very well, and the encoding is messed up.
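
Most of the encoding trouble can be avoided by decoding each response exactly once and writing through an encoding-aware file handle. Here is a sketch of that pattern for Python 2.7 (untested against this site; apparent_encoding just asks requests to guess the charset):

# -*- coding: utf-8 -*-
# Sketch of a cleaner encoding pattern (Python 2.7): decode once on input,
# stay in unicode internally, let io.open encode once on output.
import io
import requests
from BeautifulSoup import BeautifulSoup

def fetch_soup(session, url):
    response = session.get(url)
    response.encoding = response.apparent_encoding  # let requests guess the charset
    return BeautifulSoup(response.text)             # parse the unicode text directly

s = requests.Session()
soup = fetch_soup(s, 'http://www.whichwaytopay.com/world-currencies-by-country.asp')

with io.open('output.tsv', 'w', encoding='utf-8') as output:
    output.write(soup.getText() + u'\n')  # io.open encodes for us; no manual .encode()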

You can find the output here. I had to do some manual post-processing. You'll see that the output is unstructured, with lots of custom notes, some currencies don't have coins, etc.

Next step: it would be really cool if someone took this messy file, filled in the blanks (a few URLs didn't resolve - see Vietnam), put the data into a decent, machine-readable format, and published it as a GitHub repo so we can all contribute.
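
As a starting point for that machine-readable format, a record layout along these lines would be easy to validate and extend (the field names are only a proposal, not an existing standard):

# Proposed record layout for the cleaned data (field names are just a suggestion).
import json

record = {
    'country': 'United States',
    'currency': 'US Dollar',
    'iso_4217': 'USD',
    'banknotes': [100, 50, 20, 10, 5, 2, 1],
    'coins': [1, 0.5, 0.25, 0.1, 0.05, 0.01],
}
print(json.dumps(record, indent=2, sort_keys=True))

One JSON file per currency (or one array of such records) would keep diffs small and contributions easy to review.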

philshem