
AWS Solutions Architect Questions and Answers Part-15


    Get ready to excel in your AWS Solutions Architect certification with this comprehensive collection of questions and answers. Covering critical topics like cloud architecture design, AWS services, security best practices, and cost optimization, these Q&A sessions will help you gain a deep understanding of AWS concepts and prepare effectively for the exam. Whether you are a beginner or an experienced professional, these answers provide clear explanations and practical examples to solidify your AWS knowledge and boost your confidence.

     

    1. Your firm has uploaded a large amount of aerial image data to S3. In the past, in your on-premises environment, you used a dedicated group of servers to batch process this data and used RabbitMQ, an open-source messaging system, to get job information to the servers. Once processed, the data would go to tape and be shipped offsite. Your manager told you to stay with the current design, and leverage AWS archival storage and messaging services to minimize cost. Which is correct?

    A. Use SQS for passing job messages. Use CloudWatch alarms to terminate EC2 worker instances when they become idle. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage.
     
    B. Setup Auto-Scaled workers triggered by queue depth that use Spot Instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage.
     
    C. Setup Auto-Scaled workers triggered by queue depth that use Spot Instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier.
     
    D. Use SNS to pass job messages. Use CloudWatch alarms to terminate spot worker instances when they become idle. Once data is processed, change the storage class of the S3 objects to Glacier.
     

    Answer: C. Setup Auto-Scaled workers triggered by queue depth that use Spot Instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier.

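    Why this works: Spot Instances minimize compute cost, SQS replaces RabbitMQ for job messaging, and Glacier is the archival storage class the manager asked for (Reduced Redundancy Storage is not archival). Below is a minimal worker-loop sketch in Python with boto3; the queue name "image-jobs" and bucket "aerial-images" are hypothetical, and the message body is assumed to carry the S3 key to process.

```python
import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

QUEUE_URL = sqs.get_queue_url(QueueName="image-jobs")["QueueUrl"]  # hypothetical queue
BUCKET = "aerial-images"  # hypothetical bucket


def process(key: str) -> None:
    """Placeholder for the actual image-analysis work."""
    print(f"processing {key}")


while True:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20  # long polling
    )
    for msg in resp.get("Messages", []):
        key = msg["Body"]  # assumed to be the S3 object key
        process(key)
        # Archive the processed object: in-place copy with the GLACIER storage class.
        s3.copy_object(
            Bucket=BUCKET,
            Key=key,
            CopySource={"Bucket": BUCKET, "Key": key},
            StorageClass="GLACIER",
        )
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```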

    2. Your company has HQ in Tokyo and branch offices all over the world, and is using logistics software with a multi-regional deployment on AWS in Japan, Europe and USA. The logistics software has a 3-tier architecture and currently uses MySQL 5.6 for data persistence. Each region has deployed its own database.

    In the HQ region you run an hourly batch process reading data from every region to compute cross-regional reports that are sent by email to all offices. This batch process must be completed as fast as possible to quickly optimize logistics. How do you build the database architecture in order to meet the requirements?

    A. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region
     
    B. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region
     
    C. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region
     
    D. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region
     
    E. Use Direct Connect to connect all regional MySQL deployments to the HQ region and reduce network latency for the batch process
     
     
    Answer: A. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region
     
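    Why this works: a cross-region read replica keeps a continuously updated copy of each regional database in the HQ region, so the hourly batch process reads locally instead of querying across the WAN. A minimal sketch with boto3 is below; the source instance ARN, replica identifier, instance class, and regions are illustrative assumptions.

```python
import boto3

# RDS client in the HQ region (Tokyo).
hq = boto3.client("rds", region_name="ap-northeast-1")

# Create a read replica in HQ from the (hypothetical) European master,
# referenced by its ARN because the source lives in another region.
hq.create_db_instance_read_replica(
    DBInstanceIdentifier="logistics-eu-replica-hq",
    SourceDBInstanceIdentifier="arn:aws:rds:eu-west-1:111122223333:db:logistics-eu",
    DBInstanceClass="db.r5.large",
)
```

    Repeating this for the US deployment gives the batch job a local, read-only copy of every regional database.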

    3. An enterprise wants to use a third-party SaaS application. The SaaS application needs to have access to issue several API commands to discover Amazon EC2 resources running within the enterprise's account. The enterprise has internal security policies that require that any outside access to their environment conform to the principle of least privilege, and there must be controls in place to ensure that the credentials used by the SaaS vendor cannot be used by any other third party. Which of the following would meet all of these conditions?

    A. From the AWS Management Console, navigate to the Security Credentials page and retrieve the access and secret key for your account.
     
    B. Create an IAM user within the enterprise account, assign a user policy to the IAM user that allows only the actions required by the SaaS application, create a new access and secret key for the user, and provide these credentials to the SaaS provider.
     
    C. Create an IAM role for cross-account access that allows the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application.
     
    D. Create an IAM role for EC2 instances, assign it a policy that allows only the actions required for the SaaS application to work, and provide the role ARN to the SaaS provider to use when launching their application instances.
     
     

    Answer: C. Create an IAM role for cross-account access that allows the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application.

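    Why this works: a cross-account role hands the SaaS provider temporary credentials scoped by a least-privilege policy, and an sts:ExternalId condition in the trust policy ensures the role cannot be assumed on behalf of any other third party (the classic "confused deputy" control). A minimal sketch with boto3 is below; the provider account ID, external ID, role name, and the read-only ec2:Describe* policy are illustrative assumptions.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: only the SaaS provider's account (hypothetical ID) may assume
# the role, and only when presenting the agreed external ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999988887777:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "saas-12345"}},
    }],
}

iam.create_role(
    RoleName="SaaSDiscoveryRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Least privilege: allow only the read-only EC2 discovery calls the SaaS needs.
iam.put_role_policy(
    RoleName="SaaSDiscoveryRole",
    PolicyName="Ec2DiscoveryOnly",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "ec2:Describe*",
            "Resource": "*",
        }],
    }),
)
```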

    4. A web company is looking to implement an external payment service into their highly available application deployed in a VPC. Their application EC2 instances are behind a public-facing ELB. Auto Scaling is used to add additional instances as traffic increases; under normal load the application runs 2 instances in the Auto Scaling group, but at peak it can scale to 3x in size. The application instances need to communicate with the payment service over the Internet, which requires whitelisting of all public IP addresses used to communicate with it. A maximum of 4 whitelisted IP addresses are allowed at a time and can be added through an API.

    How should they architect their solution?

    A. Route payment requests through two NAT instances set up for high availability and whitelist the Elastic IP addresses attached to the NAT instances.
     
    B. Whitelist the VPC Internet Gateway Public IP and route payment requests through the Internet Gateway.
     
    C. Whitelist the ELB IP addresses and route payment requests from the Application servers through the ELB.
     
    D. Automatically assign public IP addresses to the application instances in the Auto Scaling group and run a script on boot that adds each instance's public IP address to the payment validation whitelist API.
     
     

    Answer: A. Route payment requests through two NAT instances set up for high availability and whitelist the Elastic IP addresses attached to the NAT instances.

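    Why this works: all outbound traffic from the private application instances leaves through the NAT instances, so only their two Elastic IPs ever need whitelisting no matter how far the Auto Scaling group grows (well under the 4-address limit). A minimal sketch of attaching an Elastic IP to a NAT instance with boto3 is below; the instance ID is hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate a VPC Elastic IP and bind it to the (hypothetical) NAT instance.
alloc = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=alloc["AllocationId"],
    InstanceId="i-0abc123def456789a",  # hypothetical NAT instance ID
)

# This fixed address is what gets registered with the payment provider's
# whitelist API; repeat for the second NAT instance.
print("Whitelist this address:", alloc["PublicIp"])
```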

    5. You have a periodic image-analysis application that gets some files in input, analyzes them, and for each file writes some data in output to a text file. The number of files in input per day is high and concentrated in a few hours of the day.

    Currently you have a server on EC2 with a large EBS volume that hosts the input data and the results. It takes almost 20 hours per day to complete the process.

    What services could be used to reduce the elaboration time and improve the availability of the solution?

    A. S3 to store I/O files. SQS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue.
     
    B. EBS with Provisioned IOPS (PIOPS) to store I/O files. SNS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the number of SNS notifications.
     
    C. S3 to store I/O files. SNS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the number of SNS notifications.
     
    D. EBS with Provisioned IOPS (PIOPS) to store I/O files. SQS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue.
     
     

    Answer: A. S3 to store I/O files. SQS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue.

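    Why this works: S3 removes the single-server EBS bottleneck, and queue depth is a direct measure of backlog, so a CloudWatch alarm on ApproximateNumberOfMessagesVisible can grow the fleet during the few busy hours and shrink it afterwards. A minimal sketch of the scale-out trigger with boto3 is below; the group name, queue name, threshold, and adjustment size are illustrative assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Simple scaling policy: add two workers each time the alarm fires.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="image-workers",  # hypothetical group
    PolicyName="scale-out-on-backlog",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
    Cooldown=300,
)

# Alarm on SQS backlog: more than 100 visible messages for two minutes.
cloudwatch.put_metric_alarm(
    AlarmName="image-jobs-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "image-jobs"}],  # hypothetical queue
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```

    A mirrored scale-in policy and alarm can be added the same way so the fleet shrinks once the backlog drains.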
