I was content with the AWS CLI initially, but once I had enabled and required multi-factor authentication (MFA), using the AWS CLI became a bit of a nuisance. So I started using aws-mfa. I also wrote a post about it here, titled Using MFA with AWS CLI.
The next tool I started using was Awsume. This Python package makes it easier to manage your AWS credentials and sessions, especially if you are accessing more than one account.
Once we started using Single Sign-On (SSO) more and more at work, I switched to AWS Vault. I believe I switched simply because Awsume did not support SSO at the time, though I am not certain of that (nor do I know whether Awsume supports SSO today). Either way, what I really like about AWS Vault is that it stores the credentials in an encrypted password store, so I no longer had credentials lying around on my hard drive in plain text.
After reading the article Taking AWS Account Logins For Granted, I also switched to using Granted. In contrast to AWS Vault, Granted does not seem to have a way to securely store your long-lived credentials, so if you rely on those, you may want to keep AWS Vault around for them. Since both tools use the same ~/.aws/config file, you can use them next to each other without a problem.
And that’s the current state of affairs for me. Since I effectively only deal with SSO logins, I use Granted and its assume command for my day-to-day work. Combined with the Granted add-on for Firefox, I can easily use multiple accounts and roles on the command line and in my browser.
Before I dive into what I did to resolve the situation:
The process of requesting or generating a new certificate is out of scope for this article. And if you are in the same situation as I was (with an expired certificate installed on the device), the “get a new certificate” part is probably something you have dealt with before. In my case I generated my own certificate signed by my own CA.
After obtaining the new certificate I could:
- Use scp to copy the new certificate and the private key to the NAS.
- Replace the files in the FQDN and default directories under /usr/syno/etc/certificate/system with the new certificate and key (details below).
- Reboot the NAS.

Now for the details about the files. Both the default directory and the FQDN directory contained the same files:

- cert.pem: the certificate itself (in my case the certificate + intermediary)
- chain.pem: the certificate chain (in my case the CA certificate)
- fullchain.pem: a concatenation of the files cert.pem and chain.pem
- privkey.pem: the private key

Since the script I use to generate my certificates outputs the certificate plus intermediate, I guess that’s what I used in the web interface when I uploaded my certificate a year ago. I decided to do the same this time since it had been working so far.
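For completeness, here is roughly how those files relate to each other on the command line. This is a sketch with placeholder contents (in reality the certificate and key come from your CA or cert script):

```shell
# Recreate the layout with placeholder contents to show how the files
# relate to each other:
printf 'cert + intermediate\n' > cert.pem
printf 'CA certificate\n'      > chain.pem

# fullchain.pem is simply cert.pem followed by chain.pem:
cat cert.pem chain.pem > fullchain.pem
```

After that, an scp of the four files to the NAS and moving them into the two directories is all that is left.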
After the reboot the web interface was using the new certificate and I could access the NAS again with my browser. But according to the control panel, I was still using an expired certificate.
So as a last step I updated the certificate via the web interface and after that everything was working again.
Since it took me a little while to figure out how to get it working in the first place, and get a comfortable workflow in the second place, I decided to write it down here for future reference.
First a bit of context.
One of the AWS best practices is to enable multi-factor authentication (MFA) for privileged users. You can even create a policy to require MFA. This means that users affected by the policy will have to enter their MFA code to log in via the web console, but also if they want to access the AWS APIs from the command line (e.g. in a Python script or using the AWS CLI).
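Such a policy typically hinges on the aws:MultiFactorAuthPresent condition key. A condensed sketch along the lines of the example in the AWS documentation (not a drop-in policy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptSessionSetupIfNoMFA",
      "Effect": "Deny",
      "NotAction": ["iam:*", "sts:GetSessionToken"],
      "Resource": "*",
      "Condition": {
        "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
      }
    }
  ]
}
```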
Without MFA, you can configure your credentials in ~/.aws/credentials and that file might look like this:
[default]
aws_access_key_id = AKIAIFFJM737PF5LH4VA
aws_secret_access_key = +gGgW9ElnjG73//uJxziR8dvgQzs++SFbR4at1Q9
(Don’t worry, the credentials have already been revoked.)
Assuming you have installed the AWS CLI, you can now interact with the AWS API like this:
$ aws s3 ls example.user.bucket
2019-02-19 20:23:44 3023 Vagrantfile
2019-02-19 20:23:42 372 config.yml
2019-02-19 20:23:44 206115 screenshot-1.png
But when the use of MFA is enforced, you’ll see this:
$ aws s3 ls example.user.bucket
An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
Before we can continue, you’ll have to configure an MFA device. See the official documentation on enabling a Virtual Multi-factor Authentication (MFA) Device for instructions.
Make note of the assigned device, we’ll need it in a moment.
To be able to use the CLI, you’ll have to request a so-called “session token”:
$ aws sts get-session-token \
--serial-number arn:aws:iam::xxxxxxxxxxxx:mfa/example.user \
--token-code 287632
{
"Credentials": {
"AccessKeyId": "ASIARYSXPEDHOE73FL3V",
"SecretAccessKey": "S1rS...F8KL",
"SessionToken": "FQoG...seMF",
"Expiration": "2019-02-20T07:36:27Z"
}
}
Note the 6-digit code at the end of the command; this is the code generated by your MFA device (probably Google Authenticator or a similar app).
You can use these credentials in several ways:
Expose them via environment variables:
$ export AWS_ACCESS_KEY_ID=ASIARYSXPEDHOE73FL3V
$ export AWS_SECRET_ACCESS_KEY=S1rS...F8KL
$ export AWS_SESSION_TOKEN=FQoG...1seMF
Put them in the credentials file (~/.aws/credentials) as a new profile:
[default]
aws_access_key_id = AKIAIFFJM737PF5LH4VA
aws_secret_access_key = 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
[session]
aws_access_key_id=ASIARYSXPEDHOE73FL3V
aws_secret_access_key=S1rS...F8KL
aws_session_token=FQoG...1seMF
Note that with this solution, you’ll have to specify the new profile when using the CLI, e.g. “aws s3 ls example.user.bucket --profile session”.
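If you go the environment variable route, the copy/pasting itself can be scripted away. A sketch (it assumes jq is installed and reuses the example ARN and token code from above):

```shell
# Turn the JSON from get-session-token into export statements and
# evaluate them in the current shell:
eval "$(aws sts get-session-token \
    --serial-number arn:aws:iam::xxxxxxxxxxxx:mfa/example.user \
    --token-code 287632 \
    | jq -r '.Credentials
        | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId)\nexport AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey)\nexport AWS_SESSION_TOKEN=\(.SessionToken)"')"
```

After the eval, the three variables are set in the current shell, so subsequent aws calls pick them up.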
Since the token is only valid for 12 hours in my case, the manual workflow means doing the copy/paste routine at least once per (work)day. I don’t like repetitive tasks, and this is an excellent task to (at least partially) automate, so let’s make the computer do the repetitive work for us.
Obviously, I’m not the first one to want this, so I use a tool that already exists: aws-mfa (PyPI, GitHub).
There are alternatives that do, more or less, the same thing. What I liked about this tool:
As described in the documentation of the tool, you’ll have to modify your credentials file (again: ~/.aws/credentials) a little bit:
[default-long-term]
aws_access_key_id = AKIAIFFJM737PF5LH4VA
aws_secret_access_key = +gGgW9ElnjG73//uJxziR8dvgQzs++SFbR4at1Q9
aws_mfa_device = arn:aws:iam::xxxxxxxxxxxx:mfa/example.user
Note that the profile name has changed and that the “aws_mfa_device” line has been added.
With this configuration in place, you can request a session token:
$ aws-mfa
INFO - Validating credentials for profile: default
INFO - Short term credentials section default is missing, obtaining new credentials.
Enter AWS MFA code for device [arn:aws:iam::xxxxxxxxxxxx:mfa/example.user] (renewing for 43200 seconds): 466139
INFO - Fetching Credentials - Profile: default, Duration: 43200
INFO - Success! Your credentials will expire in 43200 seconds at: 2019-02-20 08:03:25+00:00
If you check your credentials file again, you’ll see that a “default” profile has been added.
[default]
assumed_role = False
aws_access_key_id = ASIARYSXPEDHNH5EOH3I
aws_secret_access_key = fcD3...+FL3A
aws_session_token = FQoG...seMF
aws_security_token = FQo...seMF
expiration = 2019-02-20 08:03:25
[default-long-term]
aws_access_key_id = AKIAIFFJM737PF5LH4VA
aws_secret_access_key = +gGgW9ElnjG73//uJxziR8dvgQzs++SFbR4at1Q9
aws_mfa_device = arn:aws:iam::xxxxxxxxxxxx:mfa/example.user
Your original credentials in the “default-long-term” section are unchanged, but the new “default” profile has all the short-lived credentials.
As you can see, the new profile has both an “aws_session_token” and an “aws_security_token” entry with the same token. Apparently this is to support both boto and boto3.
With that, the AWS CLI is working again:
$ aws s3 ls example.user.bucket
2019-02-19 20:23:44 3023 Vagrantfile
2019-02-19 20:23:42 372 config.yml
2019-02-19 20:23:44 206115 screenshot-1.png
The biggest difference from the manual workflow is that this takes a whole lot less typing and is much less error-prone.
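To shave off even more typing, the call can be wrapped in a small shell function (my own sketch, not part of aws-mfa) that only refreshes the credentials when the current ones no longer work:

```shell
# Refresh the short-term credentials only when they are missing or expired.
# The function name is my own invention; "aws sts get-caller-identity" is a
# cheap call that fails once the session token is no longer valid.
aws_login() {
    aws sts get-caller-identity > /dev/null 2>&1 || aws-mfa
}
```

Run aws_login at the start of the day (or from your shell profile) and it is a no-op whenever the token is still valid.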