Commit 698f913b authored by Kevin Fishner's avatar Kevin Fishner

explain how packer works with atlas

parent fc6b78b8
...@@ -157,10 +157,13 @@ the Packer output.
 Packer only builds images. It does not attempt to manage them in any way.
 After they're built, it is up to you to launch or destroy them as you see
-fit. As a result of this, after running the above example, your AWS account
-now has an AMI associated with it.
-
-AMIs are stored in S3 by Amazon, so unless you want to be charged about $0.01
+fit. If you want to store and namespace images for easy reference, you
+can use [Atlas by HashiCorp](https://atlas.hashicorp.com). We'll cover
+remotely building and storing images at the end of this getting started guide.
+
+After running the above example, your AWS account
+now has an AMI associated with it. AMIs are stored in S3 by Amazon,
+so unless you want to be charged about $0.01
 per month, you'll probably want to remove it. Remove the AMI by
 first deregistering it on the [AWS AMI management page](https://console.aws.amazon.com/ec2/home?region=us-east-1#s=Images).
 Next, delete the associated snapshot on the
......
...@@ -16,6 +16,9 @@ From this point forward, the most important reference for you will be
 the [documentation](/docs). The documentation is less of a guide and
 more of a reference of all the overall features and options of Packer.
 
+If you're interested in learning more about how Packer fits into the
+HashiCorp ecosystem of tools, read our [Atlas getting started overview](https://atlas.hashicorp.com/help/getting-started/getting-started-overview).
+
 As you use Packer more, please voice your comments and concerns on
 the [mailing list or IRC](/community). Additionally, Packer is
 [open source](https://github.com/mitchellh/packer) so please contribute
......
---
layout: "intro"
page_title: "Remote Builds and Storage"
prev_url: "/intro/getting-started/vagrant.html"
next_url: "/intro/getting-started/next.html"
next_title: "Next Steps"
description: |-
Up to this point in the guide, you have been running Packer on your local machine to build and provision images on AWS and DigitalOcean. However, you can use Atlas by HashiCorp to both run Packer builds remotely and store the output of builds.
---
# Remote Builds and Storage
Up to this point in the guide, you have been running Packer on your local machine to build and provision images on AWS and DigitalOcean. However, you can use [Atlas by HashiCorp](https://atlas.hashicorp.com) to run Packer builds remotely and store the output of builds.
## Why Build Remotely?
By building remotely, you can move access credentials off of developer machines, free local machines from long-running Packer processes, and automatically trigger Packer builds from sources such as `vagrant push`, a version control system, or a CI tool.
## Run Packer Builds Remotely
To run Packer remotely, two changes must be made to the Packer template. The first is the addition of the `push` [configuration](https://www.packer.io/docs/templates/push.html), which sends the Packer template to Atlas so it can run Packer remotely. The second is updating the `variables` section to read variables from the Atlas environment rather than the local environment. If the `post-processors` section is still in your template, remove it for now.
```javascript
{
"variables": {
"aws_access_key": "{{env `aws_access_key`}}",
"aws_secret_key": "{{env `aws_secret_key`}}"
},
"builders": [{
"type": "amazon-ebs",
"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
"region": "us-east-1",
"source_ami": "ami-9eaa1cf6",
"instance_type": "t2.micro",
"ssh_username": "ubuntu",
"ami_name": "packer-example {{timestamp}}"
}],
"provisioners": [{
"type": "shell",
"inline": [
"sleep 30",
"sudo apt-get update",
"sudo apt-get install -y redis-server"
]
}],
"push": {
"name": "ATLAS_USERNAME/packer-tutorial"
}
}
```
To get an Atlas username, [create an account here](https://atlas.hashicorp.com/account/new?utm_source=oss&utm_medium=getting-started&utm_campaign=packer). Replace "ATLAS_USERNAME" with your username, then run `packer push -create example.json` to send the configuration to Atlas, which automatically starts the build.
This build will fail, since neither `aws_access_key` nor `aws_secret_key` is set in the Atlas environment. To set environment variables in Atlas, navigate to the [operations tab](https://atlas.hashicorp.com/operations), click the "packer-tutorial" build configuration that was just created, and then click 'variables' in the left navigation. Set `aws_access_key` and `aws_secret_key` to their respective values. Now restart the Packer build by either clicking 'rebuild' in the Atlas UI or by running `packer push example.json` again. When you click on the active build, you can view its logs in real time.
-> **Note:** Whenever a change is made to the Packer template, you must `packer push` to update the configuration in Atlas.
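Taken together, the remote-build loop looks roughly like this (a sketch: it assumes your template is saved as `example.json` and that you have an Atlas token, which `packer push` reads from the `ATLAS_TOKEN` environment variable):

```shell
# Authenticate to Atlas. The token is generated in your Atlas
# account settings; the value below is a placeholder.
export ATLAS_TOKEN="<your Atlas token>"

# First push: create the build configuration in Atlas and
# start the initial remote build.
packer push -create example.json

# After any change to the template, push again to update the
# configuration in Atlas and trigger a fresh build.
packer push example.json
```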
## Store Packer Outputs
Now we have Atlas building an AMI with Redis pre-configured. This is great, but it's even better to store and version the AMI output so it can be easily deployed by a tool like [Terraform](https://terraform.io). The `atlas` [post-processor](/docs/post-processors/atlas.html) makes this process simple:
```javascript
{
"variables": ["..."],
"builders": ["..."],
"provisioners": ["..."],
"push": ["..."]
"post-processors": [
{
"type": "atlas",
"artifact": "ATLAS_USERNAME/packer-tutorial",
"artifact_type": "aws.ami"
}
]
}
```
Update the `post-processors` block with your Atlas username, then run `packer push example.json` and watch the build kick off in Atlas! When the build completes, the resulting artifact will be saved and stored in Atlas.
\ No newline at end of file
...@@ -2,8 +2,8 @@
 layout: "intro"
 page_title: "Vagrant Boxes"
 prev_url: "/intro/getting-started/parallel-builds.html"
-next_url: "/intro/getting-started/next.html"
-next_title: "Next Steps"
+next_url: "/intro/getting-started/remote-builds.html"
+next_title: "Remote Builds and Storage"
 description: |-
   Packer also has the ability to take the results of a builder (such as an AMI or plain VMware image) and turn it into a Vagrant box.
 ---
......
...@@ -17,6 +17,7 @@
   <li><a href="/intro/getting-started/provision.html">Provision</a></li>
   <li><a href="/intro/getting-started/parallel-builds.html">Parallel Builds</a></li>
   <li><a href="/intro/getting-started/vagrant.html">Vagrant Boxes</a></li>
+  <li><a href="/intro/getting-started/remote-builds.html">Remote Builds</a></li>
   <li><a href="/intro/getting-started/next.html">Next Steps</a></li>
 </ul>
 <% end %>
......