
AWS Solutions Architect Questions and Answers Part- 21


    Get ready to excel in your AWS Solutions Architect certification with this comprehensive collection of questions and answers. Covering critical topics like cloud architecture design, AWS services, security best practices, and cost optimization, these Q&A sessions will help you gain a deep understanding of AWS concepts and prepare effectively for the exam. Whether you are a beginner or an experienced professional, these answers provide clear explanations and practical examples to solidify your AWS knowledge and boost your confidence.

    Test Your Skills

    1. A web startup runs its very successful social news application on Amazon EC2 with an Elastic Load Balancer, an Auto Scaling group of Java/Tomcat application servers, and DynamoDB as the data store. The main web application runs best on m2.xlarge instances since it is highly memory-bound. Each new deployment requires semi-automated creation and testing of a new AMI for the application servers, which takes quite a while and is therefore only done once per week.

    Recently, a new chat feature has been implemented in Node.js and waits to be integrated in the architecture. First tests show that the new component is CPU-bound. Because the company has some experience with using Chef, they decided to streamline the deployment process and use AWS OpsWorks as an application lifecycle tool to simplify management of the application and reduce the deployment cycles.

    What configuration in AWS OpsWorks is necessary to integrate the new chat module in the most cost-efficient and flexible way?

    A. Create one AWS OpsWorks stack, create one AWS OpsWorks layer, create one custom recipe

    B. Create one AWS OpsWorks stack, create two AWS OpsWorks layers, create one custom recipe

    C. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create one custom recipe

    D. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create two custom recipes
     
     

    Answer: B

    B. Create one AWS OpsWorks stack, create two AWS OpsWorks layers, create one custom recipe
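    Answer B can be sketched against the boto3 OpsWorks API. The snippet below only builds the request payloads (no AWS call is made); the stack name, ARNs, and recipe name are hypothetical placeholders, not values from the scenario.

```python
# Sketch of answer B: one OpsWorks stack, two layers (one per workload),
# one shared custom recipe. The dicts mirror boto3's
# opsworks.create_stack / create_layer parameters; all names and ARNs are
# hypothetical. With credentials you would pass them to
# boto3.client("opsworks").create_stack(**stack), etc.

stack = {
    "Name": "social-news",  # hypothetical stack name
    "Region": "us-east-1",
    "ServiceRoleArn": "arn:aws:iam::123456789012:role/opsworks-service",  # placeholder
    "DefaultInstanceProfileArn": "arn:aws:iam::123456789012:instance-profile/opsworks",  # placeholder
    "UseCustomCookbooks": True,  # lets both layers run the one custom recipe
}

def layer_payload(stack_id: str, name: str, shortname: str) -> dict:
    """Build a create_layer payload sharing the single custom deploy recipe."""
    return {
        "StackId": stack_id,
        "Type": "custom",
        "Name": name,
        "Shortname": shortname,
        # One custom recipe serves both layers; instance types (memory-
        # optimized vs. CPU-optimized) are chosen later, per instance.
        "CustomRecipes": {"Deploy": ["app::deploy"]},  # hypothetical recipe
    }

layers = [
    layer_payload("stack-1", "java-tomcat-app", "java"),  # memory-bound web app
    layer_payload("stack-1", "nodejs-chat", "chat"),      # CPU-bound chat service
]
```

    Keeping both layers in one stack lets them share the cookbook repository and deployment lifecycle, while separate layers allow memory-optimized instances for the web app and CPU-optimized instances for the chat service.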

    click to know answer Collapse

    2. An ERP application is deployed across multiple AZs in a single region. In the event of failure, the Recovery Time Objective (RTO) must be less than 3 hours, and the Recovery Point Objective (RPO) must be 15 minutes. The customer realizes that data corruption occurred roughly 1.5 hours ago.

    What DR strategy could be used to achieve this RTO and RPO in the event of this kind of failure?

     

    A. Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes.

    B. Use synchronous database master-slave replication between two Availability Zones.

    C. Take hourly DB backups to EC2 instance store volumes, with transaction logs stored in S3 every 5 minutes.

    D. Take 15-minute DB backups stored in Glacier, with transaction logs stored in S3 every 5 minutes.
     
     

    Answer: A

    A. Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes.
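    The back-of-envelope arithmetic behind answer A can be written out. The only assumption is that restoring an hourly S3 backup and replaying up to ~12 five-minute log files fits comfortably inside the 3-hour window.

```python
# Why answer A meets the objectives (all times in minutes).

BACKUP_INTERVAL = 60   # hourly full DB backups to S3
LOG_INTERVAL = 5       # transaction logs shipped to S3 every 5 minutes
RPO_TARGET = 15
RTO_TARGET = 180
CORRUPTION_AGE = 90    # corruption occurred roughly 1.5 hours ago

# Restore path: take the last full backup from *before* the corruption,
# then replay transaction logs up to the last log shipped before the
# corruption instant. Only writes after that final log are lost, so the
# worst-case data loss is one log interval.
worst_case_data_loss = LOG_INTERVAL

print(worst_case_data_loss <= RPO_TARGET)  # True: 5-minute loss is within the 15-minute RPO

# Option D fails on RTO rather than RPO: standard Glacier retrievals alone
# have historically taken 3-5 hours, blowing the 3-hour recovery target.
# Option B (synchronous replication) replicates the corruption itself.
```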


    3. Your application is using an ELB in front of an Auto Scaling group of web/application servers deployed across two AZs and a Multi-AZ RDS instance for data persistence.

    The database CPU is often above 80% usage and 90% of I/O operations on the database are reads. To improve performance you recently added a single-node Memcached ElastiCache cluster to cache frequent DB query results. In the next weeks the overall workload is expected to grow by 30%.

    Do you need to change anything in the architecture to maintain the high availability of the application with the anticipated additional load? Why?

     
    A. Yes, you should deploy two Memcached ElastiCache clusters in different AZs because the RDS instance will not be able to handle the load if the cache node fails.

    B. No, if the cache node fails, the automated ElastiCache node recovery feature will prevent any availability impact.

    C. Yes, you should deploy the Memcached ElastiCache cluster with two nodes in the same AZ as the RDS DB master instance to handle the load if one cache node fails.

    D. No, if the cache node fails you can always get the same data from the DB without having any availability impact.
     
     
    Answer: A

    A. Yes, you should deploy two Memcached ElastiCache clusters in different AZs because the RDS instance will not be able to handle the load if the cache node fails.
     

    4. A customer has a 10 Gbps AWS Direct Connect connection to an AWS region where they have a web application hosted on Amazon Elastic Compute Cloud (EC2). The application has dependencies on an on-premises mainframe database that uses a BASE (Basically Available, Soft state, Eventual consistency) rather than an ACID (Atomicity, Consistency, Isolation, Durability) consistency model. The application is exhibiting undesirable behavior because the database is not able to handle the volume of writes. How can you reduce the load on your on-premises database resources in the most cost-effective way?

    A. Use an Amazon Elastic MapReduce (EMR) S3DistCp as a synchronization mechanism between the on-premises database and a Hadoop cluster on AWS.

    B. Modify the application to write to an Amazon SQS queue and develop a worker process to flush the queue to the on-premises database.

    C. Modify the application to use DynamoDB to feed an EMR cluster which uses a map function to write to the on-premises database.

    D. Provision an RDS read-replica database on AWS to handle the writes and synchronize the two databases using Data Pipeline.
     
     
     
     
     

    Answer: B

    B. Modify the application to write to an Amazon SQS queue and develop a worker process to flush the queue to the on-premises database.
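    The decoupling pattern from answer B can be simulated locally: the application writes to a durable queue instead of the mainframe, and a worker drains the queue at whatever rate the on-premises database can sustain. In the real architecture the in-memory queue below would be an SQS queue (boto3 `sqs.send_message` / `receive_message`); everything here is a local stand-in so the shape of the pattern is visible without AWS credentials.

```python
# Local simulation of the SQS write-buffering pattern (answer B).
from queue import Queue, Empty

write_buffer: Queue = Queue()  # stand-in for the SQS queue
mainframe_rows: list = []      # stand-in for the on-premises database

def handle_request(payload: dict) -> None:
    """Application path: enqueue the write and return immediately."""
    write_buffer.put(payload)

def flush_worker(batch_size: int = 10) -> int:
    """Worker path: drain up to batch_size queued writes into the DB."""
    flushed = 0
    while flushed < batch_size:
        try:
            row = write_buffer.get_nowait()
        except Empty:
            break
        mainframe_rows.append(row)  # real worker: one DB write per message
        flushed += 1
    return flushed

# A burst of writes the mainframe could not absorb directly...
for i in range(25):
    handle_request({"order_id": i})

# ...is absorbed by the queue and flushed at the worker's own pace.
while flush_worker():
    pass

print(len(mainframe_rows))  # 25 -- nothing lost, load smoothed
```

    The queue absorbs the write burst so the mainframe sees a bounded, steady write rate; since the database is already BASE/eventually consistent, the extra queuing delay is acceptable.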


    5. Your company previously configured a heavily used, dynamically routed VPN connection between your on-premises data center and AWS. You recently provisioned a Direct Connect connection and would like to start using the new connection. After configuring Direct Connect settings in the AWS Console, which of the following options will provide the most seamless transition for your users?

    A. Delete your existing VPN connection to avoid routing loops, configure your Direct Connect router with the appropriate settings, and verify network traffic is leveraging Direct Connect.

    B. Configure your Direct Connect router with a higher BGP priority than your VPN router, verify network traffic is leveraging Direct Connect, and then delete your existing VPN connection.

    C. Update your VPC route tables to point to the Direct Connect connection, configure your Direct Connect router with the appropriate settings, verify network traffic is leveraging Direct Connect, and then delete the VPN connection.

    D. Configure your Direct Connect router, update your VPC route tables to point to the Direct Connect connection, configure your VPN connection with a higher BGP priority, and verify network traffic is leveraging the Direct Connect connection.
     
     

    Answer: A

    A. Delete your existing VPN connection to avoid routing loops, configure your Direct Connect router with the appropriate settings, and verify network traffic is leveraging Direct Connect.
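    Whichever transition order is chosen, the underlying concern in every option is which path wins when the same on-premises prefix is reachable over both Direct Connect and the VPN. AWS prefers a Direct Connect route over a VPN route for the same prefix; the snippet below is a toy model of that preference, a deliberate simplification of BGP path selection, not an implementation of it.

```python
# Toy model of AWS route preference between two paths advertising the
# same on-premises prefix. Lower number = more preferred; the ordering
# reflects AWS preferring Direct Connect over a VPN for identical prefixes.
PREFERENCE = {"direct-connect": 0, "vpn": 1}

def best_path(routes: list) -> dict:
    """Pick the active route among candidates for the same prefix."""
    return min(routes, key=lambda r: PREFERENCE[r["type"]])

routes = [
    {"prefix": "10.0.0.0/16", "type": "vpn"},
    {"prefix": "10.0.0.0/16", "type": "direct-connect"},
]
print(best_path(routes)["type"])  # direct-connect
```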
