AppScale on Equinix Metal™

Create a trial one-node proof-of-concept deployment of AppScale on Equinix Metal™

Try our free, self-service, one-node proof-of-concept trial to evaluate AppScale on your own. This guide describes how to create a simple one-node proof-of-concept deployment of AppScale on Equinix Metal™. The outcome is a fully functional private region of AWS, albeit with limited resources. The deployment comes with an easy-to-use GUI; if you prefer, you can also use the AWS CLI and many other AWS-compatible tools.

Prerequisites

  • An account on Equinix Metal™, including:
    • API key (for our code to configure a server automatically)
    • SSH key (so you can SSH into the server we create)
    • A block of public IPs (one /28 block of "Public IPv4 addresses" will work)
    • Don't forget to enter a discount code if you received one
  • A host from which you'll drive the installation (e.g., your laptop), with Terraform installed
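Before proceeding, you can sanity-check the driving host. This is a minimal sketch, assuming git, Terraform, and an SSH client are the tools you will use in the steps below:

```shell
# Check that the tools used in this guide are present on the driving host
for tool in git terraform ssh; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING - install it before proceeding"
  fi
done
```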


Deployment of the PoC

On the machine where you installed Terraform, clone our GitHub repository:

git clone https://github.com/AppScale/ats-packet

Change the working directory to where the PoC deployment configuration resides:

cd ats-packet/provision/small

Create a file named terraform.tfvars with the values from your Equinix Metal™ account (the API token, the project ID, and the CIDR). For example:

auth_token = "ho7e43EKzRoQLg133eWQsMvYM87EiUpu"
project_id = "62893b3e-1e7d-43ff-e7bd-cc3d3b04c5ed"
public_ip_cidr = "172.16.203.176/28"

Note: the deployment defaults to the SJC1 datacenter; to use a different one, set the facility variable in terraform.tfvars to the appropriate datacenter code. Be sure the public IP CIDR belongs to the datacenter you choose.
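For example, a terraform.tfvars that selects a different facility might look like the following (the facility code shown here is an illustrative assumption; check your Equinix Metal™ account for the valid code of the datacenter where your IP block was reserved):

```hcl
auth_token     = "ho7e43EKzRoQLg133eWQsMvYM87EiUpu"
project_id     = "62893b3e-1e7d-43ff-e7bd-cc3d3b04c5ed"
public_ip_cidr = "172.16.203.176/28"
facility       = "ewr1"   # illustrative facility code; substitute your own
```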

Prepare the Terraform environment first:

terraform init

Make sure the initialization succeeded: you should see a message like

Terraform has been successfully initialized!

followed by a few more instructions from Terraform. You can now deploy the plan with

terraform apply

After you confirm the command, the process starts. A few minutes later you will see a completion message, something like:

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

console_account = poc
console_location = https://console.ats-147-75-78-49.euca.me/
console_password = AUgXaaFPrf
console_user = admin
ssh_command = ssh root@ssh.ats-147-75-78-49.euca.me

The output will be specific to your deployment. Once you see it, wait a few more minutes while the installation of ATS completes; the console may come up before everything is finished, so this is a good time to get a coffee. You can check for completion by looking at the end of the provision.log file on the server:

ssh root@ssh.ats-147-75-78-49.euca.me tail -4 provision.log

You are looking for something like

PLAY RECAP *********************************************************************
ats : ok=109 changed=44 unreachable=0 failed=0 skipped=30 rescued=0 ignored=5
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
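If you would rather not check by hand, a small loop like the following waits for the recap to appear. This is a sketch: the hostname is the one from the example output above, so substitute the host from your own ssh_command output:

```shell
# Poll provision.log until the Ansible play recap appears (hostname is the
# example from above; replace it with your deployment's ssh_command host)
HOST=ssh.ats-147-75-78-49.euca.me
until ssh "root@$HOST" tail -4 provision.log 2>/dev/null | grep -q 'PLAY RECAP'; do
  echo "still provisioning..."
  sleep 60
done
echo "provisioning finished"
```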

In case you forget any of the outputs that Terraform produced, you can also find them using

terraform show

Upon completion of the ATS installation, you will be able to access the console using the password above with the poc account and the admin user.

Post-Deployment

Congratulations: you have now created your own AWS private region. The admin user, as the name suggests, is the administrator of the poc account and is therefore allowed to create users, groups, policies, and so on within the poc account. You can explore the console: its operation should be familiar to any AWS user.

If you prefer the aws command line, SSH into the node as root (see the ssh_command output of the Terraform plan). The AWS CLI has been installed and configured automatically, and you can use any of the commands you are familiar with to interact with the deployment.

For example:

aws ec2 describe-instances
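Other familiar commands work the same way. As a sketch (the bucket name below is an arbitrary placeholder, not something the deployment creates):

```shell
# List the availability zones of the private region
aws ec2 describe-availability-zones

# Create an S3 bucket (name is an arbitrary example) and list all buckets
aws s3 mb s3://poc-example-bucket
aws s3 ls
```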

We have created a post-deployment Ansible playbook that creates some AWS artifacts (e.g., volumes or users) in case you want to follow the self-guided tour of ATS. Otherwise, you can simply delete them.

To install virtual machine images (EMIs) on your cloud, SSH into the node as root and run:

eucalyptus-images

Follow the instructions to select and install an image from a curated list of popular open-source distributions. Be sure to note the "Login" for each image you install; you will need it to SSH into any instances you launch. Use the Management Console or command-line tools to configure security groups, add SSH key pairs, and launch your instances.
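The command-line route can be sketched as follows (the key name, security group, image ID, and instance type are illustrative placeholders; substitute values from your own deployment):

```shell
# Create an SSH key pair and save the private key locally (name is an example)
aws ec2 create-key-pair --key-name poc-key \
  --query 'KeyMaterial' --output text > poc-key.pem
chmod 600 poc-key.pem

# Open SSH access in the default security group
aws ec2 authorize-security-group-ingress --group-name default \
  --protocol tcp --port 22 --cidr 0.0.0.0/0

# Launch an instance from an installed image; replace emi-xxxxxxxx with an
# image ID from "aws ec2 describe-images", and pick an instance type your
# deployment offers
aws ec2 run-instances --image-id emi-xxxxxxxx \
  --instance-type m1.small --key-name poc-key
```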

No deployment is complete without instructions on how to terminate it. To terminate the deployment, simply run:

terraform destroy


Limitation of PoC

The PoC runs on a single node, which limits how many virtual machines, buckets, and volumes you can use before running out of capacity. For larger (multinode) PoC environments, feel free to contact us. This automated installation does not configure a DNS domain for the ATS deployment, so some endpoints do not work correctly: in particular, any load balancer created in the deployment will not work, since there is no DNS delegation for the domain.

The private region you get supports the core AWS APIs, but not every AWS API. Contact us on our community Slack channel, or email info@appscale.com, to inquire about the APIs your workload needs.