An overview of AppScale's logical architecture, its deployment models, and the path to a full solution.
AppScale allows you to create your own private AWS region. From a technical perspective, this means that an AppScale deployment exposes API endpoints that tools and the AWS CLI can connect to, just as they would connect to an AWS region. Each tool has its own way of specifying the endpoints and the region. For example, to use the AWS CLI with AppScale you would modify its configuration to add an appscale-region profile pointing to your deployment. You can then interact with the AppScale region as usual.
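The same approach works from code. Below is a minimal sketch using boto3; the endpoint URL, region name, and credentials are placeholders for the values of your own deployment, not values defined by AppScale itself.

```python
# A minimal sketch, assuming boto3 is installed and that the deployment's
# EC2 endpoint is reachable at the hypothetical address below.
import boto3

session = boto3.Session(
    aws_access_key_id="AKIA...",        # credentials issued by your AppScale deployment
    aws_secret_access_key="...",
    region_name="appscale-region",      # hypothetical region name matching your CLI profile
)

# Point the EC2 client at the deployment's endpoint instead of an AWS region.
ec2 = session.client("ec2", endpoint_url="https://ec2.appscale.example.com")

# From here the API is used exactly as it would be against AWS.
print(ec2.describe_instances())
```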
The Region Tier is the front-end of the system, handling incoming requests, storing cloud-wide data, and implementing all operations that are not specific to an Availability Zone (AZ). Endpoints for services such as EC2, S3, and IAM are implemented by the User-Facing Services (UFS) component, which is where AWS API syntax is handled. UFS delegates S3 requests to the Object Storage Provider (OSP), which relies on storage backends to implement S3. Cloud-wide state is managed by the Cloud Controller (CLC), which relies on a relational database for persistence. UFS can be replicated and separated from the CLC and OSP for scalability and availability.
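For example, S3 traffic enters through UFS and is delegated to the OSP, but from a client's point of view it is plain S3. The sketch below assumes a hypothetical S3 endpoint name; the actual endpoint layout depends on how your deployment was configured.

```python
# A minimal sketch of using the S3 API against an AppScale deployment.
# The endpoint URL and region name are assumptions for illustration.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.appscale.example.com",  # assumed S3 endpoint (served by UFS/OSP)
    region_name="appscale-region",
)

# Standard S3 calls; UFS accepts them and delegates to the Object Storage Provider.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello from AppScale")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```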
The Availability Zone Tier implements operations that are specific to an AZ. These include block storage management, implemented by the Storage Controller (SC), as well as AZ-specific instance scheduling and distribution of network configuration, implemented by the Cluster Controller (CC). There is one SC and CC pair for each AZ. Although the SC and CC can be co-located with Region Tier components, best practice for performance is to run the pair on a dedicated server.
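A block-storage request illustrates the split: the client calls the region-wide EC2 endpoint, and the volume is provisioned by the SC responsible for the requested AZ. The AZ name and endpoint below are assumptions.

```python
# A minimal sketch: creating a block-storage volume in a specific AZ.
# The endpoint URL and AZ name are placeholders for your deployment's values.
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://ec2.appscale.example.com",  # assumed EC2 endpoint (UFS)
    region_name="appscale-region",
)

# The request is served by the Storage Controller of the named AZ.
volume = ec2.create_volume(AvailabilityZone="appscale-az-1", Size=10)  # size in GiB
print(volume["VolumeId"], volume["AvailabilityZone"])
```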
The Node Tier is composed exclusively of individual compute nodes, each of which logically resides in one of the AZs. A Node Controller (NC) runs on each compute node, managing the execution of virtual machines and provisioning them with network connectivity and block storage, either node-local or provided by the Storage Controller. NCs also collect instance performance metrics, which are propagated to the Region Tier for storage and made available to clients via the CloudWatch service.
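Those metrics can be read back with the standard CloudWatch API. The sketch below assumes a hypothetical monitoring endpoint and instance ID; only the standard AWS/EC2 namespace and metric names are implied by the text above.

```python
# A minimal sketch: reading instance metrics that NCs collect and that are
# propagated to the Region Tier via the CloudWatch service.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client(
    "cloudwatch",
    endpoint_url="https://monitoring.appscale.example.com",  # assumed CloudWatch endpoint
    region_name="appscale-region",
)

now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical instance
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)
print(stats["Datapoints"])
```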
Deployment Configurations
The configuration of AppScale is very flexible, and although custom configurations are always possible, we categorize deployments as Small, Medium, and Large. An AppScale deployment inherits characteristics from the underlying hardware, such as network, CPU, and storage performance, overall capacity, and production readiness, so choosing the appropriate size delivers the best performance per dollar.

Small: a single-node deployment that cannot be expanded, intended for proof-of-concept setups that demonstrate the features of AppScale. The control plane, Storage, and Compute subsystems are all co-located. All features of AppScale are available, but the performance and capacity of the system are limited. Failure of the node may result in total loss of data (note: using external persistent volumes would prevent this).
Medium: a deployment with two or more control-plane nodes (at least one CLC node and one AZC node) and three or more Compute nodes that double as Storage nodes. In this configuration compute and storage capacity scale in tandem, allowing cost-effective deployments across a range of sizes. There is redundancy for outside connectivity and for data (both block and object).
Large: a deployment with two or more control-plane nodes (one CLC node and one or more AZC nodes), three or more dedicated Storage nodes, and one or more dedicated Compute nodes. In this configuration compute and storage capacity can be scaled separately, at the additional cost of dedicated Storage nodes. There is redundancy for outside connectivity and for data (both block and object).
Is your AWS workload ready for AppScale?
Learn more about the characteristics that make your AWS workloads a great fit for AppScale.