10

I am using a CLI tool (from Apache Spark) that uses boto under the hood. Although I have already confirmed that

 AWS_ACCESS_KEY
 AWS_SECRET_KEY

are correct (by running ec2-describe-regions), the authorization still fails:

 ec2/spark-ec2  -k mykey --copy -s 5 -i ~/.ssh/mykey.pem -t c3.2xlarge 
 -z us-east-1a -r us-east-1 launch mycluster

Note the final error after the stack trace:

<Response><Errors><Error><Code>AuthFailure</Code><Message>AWS was not able to
validate the provided access credentials</Message></Error></Errors>

Here is the full output:

Setting up security groups...
ERROR:boto:401 Unauthorized
ERROR:boto:<?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>AuthFailure</Code><Message>AWS was not able to validate the provided access credentials</Message></Error></Errors><RequestID>f960cab0-bfe6-4939-913c-5fbc0bf8662f</RequestID></Response>
Traceback (most recent call last):
  File "ec2/spark_ec2.py", line 1509, in <module>
    main()
  File "ec2/spark_ec2.py", line 1501, in main
    real_main()
  File "ec2/spark_ec2.py", line 1330, in real_main
    (master_nodes, slave_nodes) = launch_cluster(conn, opts, cluster_name)
  File "ec2/spark_ec2.py", line 482, in launch_cluster
    master_group = get_or_make_group(conn, cluster_name + "-master", opts.vpc_id)
  File "ec2/spark_ec2.py", line 343, in get_or_make_group
    groups = conn.get_all_security_groups()
  File "/shared/sparkup2/ec2/lib/boto-2.34.0/boto/ec2/connection.py", line 2969, in get_all_security_groups
    [('item', SecurityGroup)], verb='POST')
  File "/shared/sparkup2/ec2/lib/boto-2.34.0/boto/connection.py", line 1182, in get_list
    raise self.ResponseError(response.status, response.reason, body)
boto.exception.EC2ResponseError: EC2ResponseError: 401 Unauthorized
<?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>AuthFailure</Code><Message>AWS was not able to
validate the provided access credentials</Message></Error></Errors>
  • OMG... I had the same issue. After enabling boto debug logging with boto.set_stream_logger('boto'), I noticed the access_key had a %0D at the end, a non-printable character. I cleaned up the credentials file and then it worked. – Julio Feb 19 '16 at 19:49
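
A rough sketch of what Julio describes (my own diagnostic, not part of spark-ec2; the env var name follows the question): turn on boto's wire-level logging and check the key for stray trailing characters such as a carriage return (%0D).

import os
import boto

boto.set_stream_logger('boto')  # log every request/response boto makes to stderr

key = os.environ.get('AWS_ACCESS_KEY', '')
if key != key.strip():
    print('AWS_ACCESS_KEY contains leading/trailing whitespace or control characters')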

3 Answers

17

I had a similar issue and decided to post it as an answer, since it may help others coming here from Google:

Make sure the time on your machine is set correctly.

My machine's clock was running just ~8 minutes ahead of the real time, and that was enough to cause exactly the 401 shown above.

If you are on Linux you can do the following to synchronize:

sudo ntpdate us.pool.ntp.org
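
To double-check that clock skew really is the culprit before changing anything, here is a quick sketch (nothing to do with spark-ec2, just a diagnostic of mine) that compares the local clock with the Date header AWS sends back; the us-east-1 endpoint is only an example, and a skew of a few minutes is enough to invalidate request signatures.

from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
import urllib.request, urllib.error

try:
    resp = urllib.request.urlopen('https://ec2.us-east-1.amazonaws.com')
    date_header = resp.headers['Date']
except urllib.error.HTTPError as e:
    date_header = e.headers['Date']  # even an error response carries a Date header

server_time = parsedate_to_datetime(date_header)
skew = (datetime.now(timezone.utc) - server_time).total_seconds()
print('Clock skew vs AWS: %.1f seconds' % skew)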
scai
  • 1,062
akhmed
  • 270
3

Man oh man. There was a $HOME/.boto file storing my old authentication values. Most of a day lost to this!

cat ~/.boto

[Credentials]
aws_access_key_id=MY*OLD*ACCESS*KEY 
aws_secret_access_key=MY*OLD_SECRET*ACCESS*KEY
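
If you're not sure which credentials boto ended up using (env vars, ~/.boto, or somewhere else), a quick sketch to print the access key it actually resolved; conn.aws_access_key_id should reflect whatever the lookup chain picked.

import boto

conn = boto.connect_ec2()  # runs boto's normal credential lookup chain
print('boto resolved access key: %s' % conn.aws_access_key_id)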
  • I don't have any such file in my home directory. leo@leo-OptiPlex-3020:~$ ls -la ~ | grep -i boto leo@leo-OptiPlex-3020:~$ – user169015 Oct 20 '15 at 13:23
  • 1
    The ~/.boto file is just one of several places that boto will check for credentials. It is usually created manually, though a setup process could have created one for you. – SunSparc Jan 13 '16 at 20:32
2

For me the cause was different. I'm using temporary AWS credentials, which consist of AWS_ACCESS_KEY, AWS_SECRET_KEY and AWS_SESSION_TOKEN. After enabling boto debug logging with boto.set_stream_logger('boto'), I noticed that only AWS_ACCESS_KEY and AWS_SECRET_KEY were loaded from the environment; AWS_SESSION_TOKEN wasn't. Inspection of the code seems to confirm this:

https://github.com/clari/clari_dynamo/blob/master/boto/boto/provider.py#L307

What worked for me was passing the token explicitly when setting up the EC2 connection:

ec2.connect_to_region(region, security_token=os.environ.get('AWS_SESSION_TOKEN', None))
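
Spelled out with imports, a sketch of the same thing (the region name is only an example):

import os
from boto import ec2

conn = ec2.connect_to_region(
    'us-east-1',
    security_token=os.environ.get('AWS_SESSION_TOKEN'),  # forward the temporary session token
)
print(conn.get_all_security_groups())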
Marcin
  • 121
  • 2