
Top 110+ AWS Interview Questions and Answers for 2024


AWS has been rated a Leader in the 2022 Gartner Magic Quadrant for Cloud Infrastructure and Platform Services (CIPS) for the twelfth time in a row. According to Gartner, AWS is the longest-running CIPS Magic Quadrant Leader. According to IDC research, 48% of enterprises plan to keep their cloud spending steady. Undoubtedly, the AWS Solution Architect role is one of the most sought-after IT jobs.

(Figure: Cloud computing market by region)

You, too, can maximize the cloud computing career opportunities that are sure to come your way by taking an AWS Certification with Edureka. In this blog, we set you up for your next interview with AWS interview questions and answers.

Why AWS Interview Questions?

With regard to AWS, a Solution Architect would design and define AWS architecture for existing systems, migrate them to cloud architectures, and develop technical road maps for future AWS cloud implementations. So, through this article, I will bring you the top and frequently asked AWS interview questions. Gain proficiency in designing, planning, and scaling cloud implementation with the AWS Masters Program.


Let us begin by looking into the basic AWS interview questions and answers.

 

Section 1: Basic AWS Interview Questions

 

1. What is Cloud Computing? Can you talk about and compare any two popular Cloud Service Providers?

For a detailed discussion on this topic, please refer to our Cloud Computing blog. Following is the comparison between two of the most popular Cloud Service Providers:

Amazon Web Services Vs Microsoft Azure

Parameters | AWS | Azure
Initiation | 2006 | 2010
Market Share | 4x | x
Implementation | Fewer options | More room for experimentation
Features | Widest range of options | Good range of options
App Hosting | Not as good as Azure | Better than AWS
Development | Varied & great features | Varied & great features
IaaS Offerings | Good market hold | Better offerings than AWS

Related Learning: AWS vs Azure, Google Cloud vs AWS, and a detailed AWS vs Azure vs Google Cloud comparison.

2. Try this AWS scenario-based interview question: I have some private servers on my premises, and I have also distributed some of my workload on the public cloud. What is this architecture called?

  1. Virtual Private Network
  2. Private Cloud
  3. Virtual Private Cloud
  4. Hybrid Cloud

Answer D.

Explanation: This type of architecture would be a hybrid cloud. Why? Because we are using both the public cloud and on-premises servers, i.e., the private cloud. To make this hybrid architecture easy to use, wouldn’t it be better if your private and public clouds were all on the same network (virtually)? This is established by including your public cloud servers in a Virtual Private Cloud and connecting this virtual cloud with your on-premises servers using a VPN (Virtual Private Network).

With the Google Cloud Platform Course, you will learn to design, develop, and manage a robust, secure, and highly available cloud-based solution for your organization’s needs.

3. What is Auto-scaling?

Auto-scaling is a feature that allows you to provision and launch new instances based on demand. It enables you to raise or reduce resource capacity in response to demand automatically.
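As an illustration, here is a minimal boto3 sketch of a target-tracking scaling policy that keeps a group's average CPU around 50%; the group name is a hypothetical placeholder:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep the group's average CPU utilization around 50% by scaling out/in automatically.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",  # hypothetical Auto Scaling group name
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```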

4. What is geo-targeting in CloudFront?

Geo-targeting is a concept in which businesses may deliver customized information to their audience depending on their geographic location without altering the URL. This allows you to produce personalized content for a specific geographical audience while keeping their demands in mind.

5. Define and explain the three basic types of cloud services and the AWS products built on them.

There are three primary types of cloud services: compute, storage, and networking.

Here are examples of AWS products built on each of the three types. Compute services include EC2, Elastic Beanstalk, Lambda, Auto Scaling, and Lightsail. S3, Glacier, Elastic Block Store, and Elastic File System exemplify storage. VPC, Amazon CloudFront, and Route 53 are examples of networking services.

6. What are the steps involved in a CloudFormation Solution?

  • Create a new CloudFormation template or utilize an existing one in JSON or YAML format.
  • Save the code in an S3 bucket, which will act as a repository for it.
  • To call the bucket and construct a stack on your template, use AWS CloudFormation (sketched in code after this list).
  • CloudFormation scans the file and understands the services called, their sequence, and the relationships between them before provisioning them one by one.
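A minimal boto3 sketch of the stack-creation step, assuming a trivial inline template and a hypothetical stack name (in practice you would point TemplateURL at the S3 bucket holding your template):

```python
import boto3

# A trivial template; real templates usually live in an S3 bucket (TemplateURL).
template = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
"""

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="demo-stack",   # hypothetical stack name
    TemplateBody=template,    # CloudFormation parses this and provisions resources in order
)
```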

7. What are the main features of Cloud Computing?

Cloud computing has the following key characteristics:

  1. Massive amounts of computing resources can be provisioned quickly.
  2. Resources can be accessed from any location with an internet connection due to its location independence.
  3. Unlike physical devices, cloud storage has no capacity constraints, which makes it very efficient for storage.
  4. Multi-Tenancy allows a large number of users to share resources.
  5. Data backup and disaster recovery are becoming easier and less expensive with cloud computing.
  6. Scalability enables businesses to scale up and down as needed.

8. Explain AWS.

AWS is an abbreviation for Amazon Web Services, which is a collection of remote computing services also known as Cloud Computing.  This technology is also known as IaaS or Infrastructure as a Service.

9. Name some of the non-regional AWS services. 

Some of the non-regional (global) AWS services are:

  1. CloudFront 
  2. IAM
  3. Route 53 
  4. Web Application Firewall

10. What are the different layers that define cloud architecture?

The following are the various layers operated by cloud architecture:

  • CLC, or Cloud Controller
  • Cluster Controller
  • SC, or Storage Controller
  • NC, or Node Controller
  • Walrus

11. What are the tools and techniques that you can use in AWS to identify if you are paying more than you should be, and how to correct it?

You may ensure that you are paying the proper amount for the resources you use by utilizing the following resources:

  • Top Services Table: a dashboard in the cost management console that displays the five most-used services, showing how much you are spending on each.
  • Cost Explorer: lets you view and analyze your usage costs for the previous 13 months and receive a cost forecast for the next three months (see the sketch after this list).
  • AWS Budgets: lets you set a budget for the services and check whether your current usage plan fits it.
  • Cost Allocation Tags: help determine which resource cost the most in a given month; tagging lets you categorize resources and track your AWS charges.
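For example, here is a small boto3 sketch against the Cost Explorer API that breaks down a quarter's unblended cost by service; the dates are placeholders:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-04-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print each service's cost per month in the period.
for month in resp["ResultsByTime"]:
    for group in month["Groups"]:
        print(month["TimePeriod"]["Start"],
              group["Keys"][0],
              group["Metrics"]["UnblendedCost"]["Amount"])
```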

12. What are the various layers of cloud computing? Explain their work.

Cloud computing has the following layers:

  1. Infrastructure as a Service (IaaS): the on-demand provision of services such as servers, storage, networks, and operating systems.
  2. Platform as a Service (PaaS): combines IaaS with an abstracted collection of middleware services, software development, and deployment tools. PaaS also enables developers to create web or mobile apps in the cloud quickly.
  3. Software as a Service (SaaS): a software application delivered on demand in a multi-tenant model.
  4. Function as a Service (FaaS): enables end users to build and execute app functionality on a serverless architecture.

13. What are the various cloud deployment models?

There are several models for deploying cloud services:

  1. The public cloud is a collection of computer resources such as hardware, software, servers, storage, and so on that are owned and operated by third-party cloud providers for use by businesses or individuals.
  2. A private cloud is a collection of resources owned and managed by an organization for use by its employees, partners, or customers.
  3. A hybrid cloud combines public and private cloud services.

14. Is there any other alternative tool to log into the cloud environment other than the console?

The following will help you in logging into AWS resources:

  • PuTTY
  • AWS CLI for Linux
  • AWS CLI for Windows
  • AWS CLI for Windows CMD
  • AWS SDK
  • Eclipse

15. What are the native AWS Security logging capabilities?

Most AWS services provide logging capabilities. AWS CloudTrail, AWS Config, and others, for example, have account-level logging. Let’s look at two specific services:

AWS CloudTrail
This service provides a history of AWS API calls for each account. It also allows you to undertake security analysis, resource change tracking, and compliance audits on your AWS environment. A nice aspect of this service is that you can set it to send notifications via AWS SNS when fresh logs are provided.
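As a quick illustration of working with CloudTrail programmatically, here is a hedged boto3 sketch that looks up recent RunInstances API calls; the event name is just an example:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent management events for a specific API call (RunInstances as an example).
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "RunInstances"}
    ],
    MaxResults=10,
)

for e in events["Events"]:
    print(e["EventTime"], e.get("Username", "?"), e["EventName"])
```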

AWS Config
This service helps you comprehend the configuration changes that occur in your environment. It offers an AWS inventory that contains configuration history, configuration change notifications, and links between AWS resources. It may also be set to send notifications via AWS SNS when fresh logs are received.

16. What is a DDoS attack, and what services can minimize them?

DDoS (Distributed Denial of Service) is a cyber-attack in which the attacker floods a website or service with traffic from many sources, preventing genuine users from accessing it. The following native tools will help you prevent DDoS attacks on your AWS services:

  • AWS Shield
  • AWS WAF
  • Amazon CloudFront
  • Amazon Route 53
  • ELB
  • VPC

 16. List the pros and cons of serverless computing.

Advantages:

  1. Cost-effective
  2. Simplified operations
  3. Improved productivity
  4. Scalable

Disadvantages:

  1. Can introduce response latency (e.g., cold starts)
  2. Not suitable for high-compute operations due to resource constraints
  3. Security concerns arising from the shared environment
  4. Debugging can be difficult

17. What characteristics distinguish cloud architecture from traditional architecture?

The characteristics are as follows:

  • In the cloud, hardware requirements are met based on the demand generated by cloud architecture.
  • When there is a demand for resources, cloud architecture can scale them up.
  • Cloud architecture can manage and handle dynamic workloads without a single point of failure.

Earn Cloud Architect Certification and become certified.

18. What are AWS’s featured services?

AWS’s key components are as follows:

  • Elastic Compute Cloud (EC2): an on-demand computing resource for hosting applications, especially useful in times of uncertain workloads.
  • Route 53: a web-based DNS service.
  • Simple Storage Service (S3): a widely used, scalable object storage service.
  • Elastic Block Store (EBS): integrates with EC2 and lets you store persistent volumes of data.
  • CloudWatch: lets you monitor the critical areas of AWS and set alarms for troubleshooting.
  • Simple Email Service (SES): lets you send emails using regular SMTP or a RESTful API call.

 

Section 2: AWS Interview Questions and Answers for Amazon EC2 

 

For a detailed discussion on this topic, please refer to our EC2 AWS blog.

19. What does the following command do with respect to the Amazon EC2 security groups?

ec2-create-group CreateSecurityGroup

  1. Groups the user-created security groups into a new group for easy access.
  2. Creates a new security group for use with your account.
  3. Creates a new group inside the security group.
  4. Creates a new rule inside the security group.

Answer B.

Explanation: A security group is just like a firewall; it controls the inbound and outbound traffic of your instance. The command mentioned is pretty straightforward: it creates a new security group for use with your account. Once your security group is created, you can add different rules to it. For example, if you have an RDS instance, then to access it you have to add the public IP address of the machine from which you want to access it to the RDS instance's security group.
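Continuing that example, here is a minimal boto3 sketch that opens the MySQL port in a security group for a single workstation IP; the group ID and CIDR are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow one workstation to reach a MySQL/RDS endpoint protected by this group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,            # MySQL port
        "ToPort": 3306,
        "IpRanges": [{"CidrIp": "203.0.113.25/32", "Description": "office workstation"}],
    }],
)
```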

20. Here is an AWS scenario-based interview question. You have a video transcoding application. The videos are processed according to a queue. If the processing of a video is interrupted in one instance, it is resumed in another instance. Currently, there is a huge backlog of videos that need to be processed. For this, you need to add more instances, but you need these instances only until your backlog is reduced. Which of these would be an efficient way to do it?

You should use On-Demand Instances for this. Why? First, the workload has to be processed now, meaning it is urgent. Second, you don't need the instances once your backlog is cleared, so Reserved Instances are out of the picture. And since the work is urgent, you cannot stop the work on your instances just because the Spot price spiked, so Spot Instances shall not be used either. Hence, On-Demand Instances are the right choice in this case.

Related Learning: AWS Well Architected Framework

21. You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 Instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task most cost-effectively.

Which of the following will meet your requirements?

  1. Spot Instances
  2. Reserved instances
  3. Dedicated instances
  4. On-Demand instances

Answer: A

Explanation: Since the work we are addressing here is not continuous, a reserved instance shall be idle at times. The same goes with on-demand instances. Also, it does not make sense to launch an on-Demand instance whenever work comes up since it is expensive. Hence, Spot Instances will be the right fit because of their low rates and no long-term commitments.



You can even check out the details of migrating to AWS with the AWS Cloud Migration Training.

22. How are stopping and terminating instances different from each other?

Stopping and terminating are two different operations on an EC2 instance. Let's discuss them in detail (a short boto3 sketch follows the list):

  • Stopping and Starting an instance: When an instance is stopped, it performs a normal shutdown and then transitions to a stopped state. All of its Amazon EBS volumes remain attached, and you can start the instance again later. You are not charged for additional instance hours while the instance is in a stopped state.
  • Terminating an instance: When an instance is terminated, it performs a normal shutdown, and then the attached Amazon EBS volumes are deleted unless the volume’s deleteOnTermination attribute is set to false. The instance itself is also deleted, and you can’t start it again at a later time.
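A minimal boto3 sketch contrasting the two operations, with a hypothetical instance ID (in practice you would choose one or the other):

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

# Stop: normal shutdown, EBS volumes stay attached, no instance-hour charges while stopped.
ec2.stop_instances(InstanceIds=[instance_id])

# ...later, the same instance can be started again:
# ec2.start_instances(InstanceIds=[instance_id])

# Terminate: normal shutdown; EBS volumes with deleteOnTermination=true are deleted,
# and the instance can never be started again.
ec2.terminate_instances(InstanceIds=[instance_id])
```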

23. If I want my instance to run on single-tenant hardware, which value do I have to set the instance’s tenancy attribute to?

  1. Dedicated
  2. Isolated
  3. One
  4. Reserved

Answer A.

Explanation: The instance tenancy attribute should be set to Dedicated. The rest of the values are invalid.

24. When will you incur costs with an Elastic IP address (EIP)?

  1. When an EIP is allocated.
  2. When it is allocated and associated with a running instance.
  3. When it is allocated and associated with a stopped instance.
  4. Costs are incurred regardless of whether the EIP is associated with a running instance.

Answer C.

Explanation: You are not charged when exactly one Elastic IP address is associated with your running instance. But you do get charged in the following conditions:

  • When you use more than one Elastic IP with your instance.
  • When your Elastic IP is attached to a stopped instance.
  • When your Elastic IP is not attached to any instance.

25. How is a Spot instance different from an On-Demand instance or Reserved Instance?

First of all, let's understand that Spot Instances, On-Demand Instances, and Reserved Instances are all pricing models. Spot Instances provide the ability to purchase compute capacity with no upfront commitment, at hourly rates usually lower than the On-Demand rate in each region. Spot Instances work like bidding, and the bid price is called the Spot Price. The Spot Price fluctuates based on supply and demand, but customers never pay more than the maximum price they have specified. If the Spot Price rises above a customer's maximum price, the customer's EC2 instance is shut down automatically. The reverse is not true: if the Spot Price comes down again, the EC2 instance is not launched automatically; one has to do that manually. With Spot and On-Demand Instances there is no commitment to a duration from the user's side, whereas with Reserved Instances one has to stick to the chosen time period.
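As a quick, hedged illustration, a Spot request can be expressed with boto3's run_instances using InstanceMarketOptions; the AMI ID and maximum price are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Request a one-time Spot Instance; it runs only while the Spot price stays at or
# below our stated maximum price.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "MaxPrice": "0.05",        # placeholder max price in USD/hour
            "SpotInstanceType": "one-time",
        },
    },
)
```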

26. Are the Reserved Instances available for Multi-AZ Deployments?

  1. Multi-AZ Deployments are only available for Cluster Compute instance types.
  2. Available for all instance types
  3. Only available for M3 instance types
  4. Not available for Reserved Instances

Answer B.

Explanation: Reserved Instances is a pricing model, which is available for all instance types in EC2.

27. How do you use the processor state control feature available on the c4.8xlarge instance?

The processor state control consists of 2 states:

  • The C state: sleep states ranging from C0 to C6, with C6 being the deepest sleep state for a core.
  • The P state: performance states ranging from P0 to P15, with P0 being the highest and P15 the lowest possible frequency.

Now, why the C state and P state? Processors have cores, and these cores need thermal headroom to boost their performance. Since all the cores are on the processor, the temperature should be kept at an optimal state so that all the cores can perform at their highest performance.

Now, how do these states help? If a core is put into a sleep state, it reduces the overall temperature of the processor, so the other cores can perform better. The states can be coordinated across cores: the processor boosts as many cores as it can by putting other cores to sleep at the right time, and thus gets an overall performance boost.

In conclusion, the C and P states can be customized in some EC2 instances, like the c4.8xlarge instance, and thus, you can customize the processor according to your workload.

28. What kind of network performance parameters can you expect when you launch instances in a cluster placement group?

The network performance depends on the instance type and network performance specification. If launched in a placement group, you can expect up to:

  • 10 Gbps in a single flow
  • 20 Gbps in multiple flows, i.e., full duplex
  • Network traffic outside the placement group is limited to 5 Gbps (full duplex)

29. To deploy a 4-node cluster of Hadoop in AWS, which instance type can be used?

First, let's understand what actually happens in a Hadoop cluster. A Hadoop cluster follows a master-slave concept: the master machine processes all the data, while the slave machines store the data and act as data nodes. Since all the storage happens at the slaves, a higher-capacity hard disk is recommended for them, and since the master does all the processing, higher RAM and a much better CPU are required there. Therefore, you can select the configuration of your machines depending on your workload. For example, in this case c4.8xlarge would be preferred for the master machine, whereas for the slave machines we can select the i2.large instance. If you don't want to deal with configuring your instances and installing the Hadoop cluster manually, you can straight away launch an Amazon EMR (Elastic MapReduce) cluster, which automatically configures the servers for you. You dump the data to be processed in S3; EMR picks it up from there, processes it, and dumps it back into S3.

30. Where do you think an AMI fits when you are designing an architecture for a solution?

AMIs (Amazon Machine Images) are like templates of virtual machines, and an instance is derived from an AMI. AWS offers pre-baked AMIs that you can choose while launching an instance; some AMIs are not free and can be bought from the AWS Marketplace. You can also choose to create your own custom AMI, which helps you save space on AWS: for example, if you don't need a particular set of software in your installation, you can leave it out of your AMI. This makes it cost-efficient, since you are removing unwanted things.
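Creating a custom AMI from an instance you have already configured is a one-call operation in boto3; a small sketch with hypothetical IDs and names:

```python
import boto3

ec2 = boto3.client("ec2")

# Bake a reusable image from an instance we have already configured.
resp = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # hypothetical source instance
    Name="web-server-baseline-v1",      # hypothetical AMI name
    Description="Custom AMI with only the software we actually need",
)
print("New AMI:", resp["ImageId"])
```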


31. How do you choose an Availability Zone?

Let’s understand this through an example. Consider a company with users in India and the US.

Let us see how we will choose the region for this use case :

So, the regions to choose between for this use case are Mumbai and North Virginia. Now, let us first compare the pricing: you have hourly prices, which can be converted to a per-month figure, and here North Virginia emerges as the winner. But pricing cannot be the only parameter to consider; performance should also be kept in mind, so let's look at latency as well. Latency is the time a server takes to respond to your requests, i.e., the response time. North Virginia wins again!

So, in conclusion, North Virginia should be chosen for this use case.

32. Is one Elastic IP address enough for every instance that I have running?

Depends! Every instance comes with its own private and public addresses. The private address is associated exclusively with the instance and is returned to Amazon EC2 only when the instance is stopped or terminated. Similarly, the public address is associated solely with the instance until it is stopped or terminated. However, the public address can be replaced by an Elastic IP address, which stays with the instance as long as the user doesn't manually detach it. But what if you are hosting multiple websites on your EC2 server? In that case, you may require more than one Elastic IP address.

33. What are the best practices for Security in Amazon EC2?

There are several best practices to secure Amazon EC2. A few of them are given below:

  • Use AWS Identity and Access Management (IAM) to control access to your AWS resources.
  • Restrict access by only allowing trusted hosts or networks to access ports on your instance.
  • Review the rules in your security groups regularly, and ensure that you apply the principle of least privilege: only open up the permissions that you require.
  • Disable password-based logins for instances launched from your AMI. Passwords can be found or cracked, and are a security risk.

34. How can you upgrade or downgrade a system with little to no downtime?

The following steps can be used to update or downgrade a system with near-zero downtime:

First, launch the EC2 console and select the AMI (operating system). Next, create an instance using the new instance type, install the updates, and set up your applications. Then check whether the instances are operational. If everything is well, deploy the new instance and replace all the old ones. Once everything is ready, you can roll out the upgrade or downgrade with very little to no downtime.

35. What’s the Amazon EC2 root device volume? 

The root device volume stores the image used to boot an EC2 instance; it is created when an AMI launches a new instance. This root device volume is backed by EBS or an instance store. In general, the lifetime of an EC2 instance does not affect root device data stored on Amazon EBS.

36. Mention and explain the many types of Amazon EC2 instances.

The various instance types available on Amazon EC2 are:

  1. General-purpose instances: used for a wide range of tasks, they balance processor, memory, and networking resources.
  2. Compute-optimized instances: suitable for compute-intensive workloads. They can handle batch processing workloads, high-performance web servers, machine learning inference, and a wide range of other tasks.
  3. Memory-optimized instances: for workloads that process massive datasets in memory.
  4. Accelerated-computing instances: accelerate floating-point calculations, data pattern matching, and graphics processing.
  5. Storage-optimized instances: for operations on local storage that need sequential read and write access to big data sets.

37. What exactly do you mean by ‘changing’ in Amazon EC2?

Amazon EC2 now allows customers to move from the current ‘instance count-based constraints’ to the new ‘vCPU-based restrictions.’ As a result, when launching a demand-driven mix of instance types, usage is assessed in terms of the number of vCPUs.

38. Your application is running on an EC2 instance. When your instance’s CPU consumption reaches 80%, you must lower the load on it. What method do you employ to finish the task?

You can do this by setting up an Auto Scaling group to launch new instances when an EC2 instance's CPU consumption exceeds 80%, and by distributing traffic among the instances through an Application Load Balancer with the EC2 instances registered as targets.

39. How does one set up CloudWatch to recover an EC2 instance?

Here's how you can set this up (a boto3 sketch follows the steps):

  1. Using Amazon CloudWatch, create an alarm.
  2. Navigate to the Define Alarm -> Actions tab of the Alarm.
  3. Choose the Option to Recover This Instance
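The same alarm can be created programmatically; a hedged boto3 sketch using the EC2 system status check metric and the built-in recover action (the region and instance ID are placeholders):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Recover the instance automatically when the system status check fails.
cloudwatch.put_metric_alarm(
    AlarmName="recover-failed-instance",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],  # region-specific recover action
)
```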

40. How do you recover/log in to an EC2 instance for which you have lost the key?

If you have lost your key, follow the procedures below to recover an EC2 instance:

Step 1. Verify that the EC2Config service is running.

Step 2. Detach the instance's root volume.

Step 3. Attach the volume to a temporary instance.

Step 4. Change the configuration file.

Step 5. Restart the original instance.

 

Section 3: AWS interview questions for Amazon Storage

 

41. What exactly is Amazon S3?

S3 stands for Simple Storage Service, and Amazon S3 is the most widely used storage platform. S3 is an object storage service that can store and retrieve any volume of data from anywhere. Despite its flexibility, it is essentially limitless as well as cost-effective because it is on-demand storage. Beyond these advantages, it offers high levels of durability and availability. Amazon S3 aids in data management for cost reduction, access control, and compliance.

42. What Storage Classes are available in Amazon S3?

Explanation: The following Storage Classes are accessible using Amazon S3:

  1. Amazon S3 Glacier Instant Retrieval
  2. Amazon S3 Glacier Flexible Retrieval (formerly S3 Glacier)
  3. Amazon S3 Glacier Deep Archive
  4. S3 Outposts
  5. Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
  6. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
  7. Amazon S3 Standard
  8. Amazon S3 Reduced Redundancy Storage (RRS)
  9. Amazon S3 Intelligent-Tiering

43. How do you auto-delete old snapshots? 

Explanation: Here's how to delete outdated snapshots automatically (a boto3 sketch follows the steps):

  1. Take snapshots of the EBS volumes on Amazon S3 in accordance with process and best practices.
  2. To manage all of the snapshots automatically, use AWS Ops Automator.
  3. You may use this to generate, copy, and remove Amazon EBS snapshots.
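A hedged boto3 sketch of the cleanup step, deleting self-owned EBS snapshots older than a placeholder 30-day retention period (in production, AWS Ops Automator or Data Lifecycle Manager can manage this for you):

```python
import datetime
import boto3

RETENTION_DAYS = 30  # placeholder retention period

ec2 = boto3.client("ec2")
cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=RETENTION_DAYS)

# Delete snapshots owned by this account that are past the retention period.
for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
    if snap["StartTime"] < cutoff:
        ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
        print("Deleted", snap["SnapshotId"])
```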

44. Here is another scenario-based question for experienced candidates. You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application. Which method will ensure that all objects uploaded to the bucket are set to public read?

  1. Set permissions on the object for the public to read during upload.
  2. Configure the bucket policy to set all objects to public read.
  3. Use AWS Identity and Access Management roles to set the bucket for public reading.
  4. Amazon S3 objects default to public read, so no action is needed.

Answer B.

Explanation: Rather than making changes to every object, it's better to set the policy for the whole bucket. IAM is used to give more granular permissions, and S3 objects are not public by default; since this is a public website, all objects should be readable by everyone, which a bucket policy handles in one place.
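A sketch of such a bucket policy applied with boto3; the bucket name is a hypothetical placeholder (note that on newer accounts the bucket's Block Public Access settings must also allow this):

```python
import json
import boto3

bucket = "my-static-assets"  # hypothetical bucket name

# Grant anonymous read access to every object in the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```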

45. A customer wants to leverage Amazon Simple Storage Service (S3) and Amazon Glacier as part of their backup and archive infrastructure. The customer plans to use third-party software to support this integration. Which approach will limit the access of the third-party software to only the Amazon S3 bucket named “company-backup”?

  1. A custom bucket policy limited to the Amazon S3 API for the Amazon Glacier archive "company-backup".
  2. A custom bucket policy limited to the Amazon S3 API in “company-backup.”
  3. A custom IAM user policy limited to the Amazon S3 API for the Amazon Glacier archive “company-backup”.
  4. A custom IAM user policy limited to the Amazon S3 API in “company-backup”.

Answer D.

Explanation: Taking a cue from the previous question, this use case involves more granular permissions, hence IAM would be used here.

45. Can S3 be used with EC2 instances, if yes, how?

Yes, it can be used for instances with root devices backed by local instance storage. By using Amazon S3, developers have access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its global network of websites. In order to execute systems in the Amazon EC2 environment, developers use the tools provided to load their Amazon Machine Images (AMIs) into Amazon S3 and move them between Amazon S3 and Amazon EC2.

Another use case could be for websites hosted on EC2 to load their static content from S3.

For a detailed discussion on S3, please refer to our S3 AWS blog.

46. A customer implemented AWS Storage Gateway with a gateway-cached volume at their main office. An event takes the link between the main and branch offices offline. Which methods will enable the branch office to access their data?

  1. Restore by implementing a lifecycle policy on the Amazon S3 bucket.
  2. Make an Amazon Glacier Restore API call to load the files into another Amazon S3 bucket within four to six hours.
  3. Launch a new AWS Storage Gateway instance AMI in Amazon EC2, and restore from a gateway snapshot.
  4. Create an Amazon EBS volume from a gateway snapshot and mount it to an Amazon EC2 instance.

Answer C.

Explanation: The fastest way would be to launch a new storage gateway instance in Amazon EC2 and restore from a gateway snapshot. Why? Since time is the key factor driving every business, troubleshooting the offline link would take longer; restoring the storage gateway's previous working state on a new instance gives the branch office access to its data right away.

47. When you need to move data over long distances using the internet, for instance, across countries or continents, to your Amazon S3 bucket, which method or service will you use?

  1. Amazon Glacier
  2. Amazon CloudFront
  3. Amazon Transfer Acceleration
  4. Amazon Snowball

Answer C.

Explanation: You would not use Snowball because, for now, the Snowball service does not support cross-region data transfer, and since we are transferring across countries, Snowball cannot be used. Transfer Acceleration is the right choice here, as it speeds up your data transfers, by up to 300% compared to normal transfer speed, using optimized network paths and Amazon's content delivery network.

48. How can you speed up data transfer in Snowball?

The data transfer can be increased in the following way:

  • By performing multiple copy operations at one time, i.e., if the workstation is powerful enough, you can initiate multiple cp commands, each from different terminals, on the same Snowball device.
  • Copying from multiple workstations to the same Snowball.
  • Transferring large files or creating a batch of small files will reduce the encryption overhead.
  • Eliminating unnecessary hops, i.e., making a setup where the source machine(s) and the snowball are the only machines active on the switch being used, can hugely improve performance.

49. What’s the distinction between EBS and Instance Store?

EBS is a type of persistent storage from which data can be recovered later. When you save data to EBS, it remains even after the EC2 instance has been terminated. An instance store, on the other hand, is temporary storage that is physically tied to a host machine. You cannot detach an instance store volume from one instance and attach it to another. Unlike EBS, data in an instance store is lost if the instance is stopped or terminated.

50. How can you use EBS to automate EC2 backup?

To automate EC2 backups using EBS, perform the following steps (sketched in code after the list):

Step 1. Connect to AWS through the API to get a list of instances and the Amazon EBS volumes attached locally to each instance.

Step 2. List each volume’s snapshots and assign a retention time to each snapshot. Afterward, create a snapshot of each volume.

Step 3. Remove any snapshots that are older than the retention term.
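A hedged boto3 sketch of the snapshot-creation step for one instance; the instance ID is a placeholder, and the retention/cleanup step would mirror the snapshot-deletion sketch shown earlier:

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

# Find the EBS volumes attached to this instance.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
)["Volumes"]

# Snapshot each attached volume.
for vol in volumes:
    snap = ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description=f"automated backup of {vol['VolumeId']} on {instance_id}",
    )
    print("Created", snap["SnapshotId"])
```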

 

Section 4: AWS Interview Questions for AWS VPC

 

51. What Is Amazon Virtual Private Cloud (VPC) and How Does It Work?

A VPC (Virtual Private Cloud) is a logically isolated virtual network that you define within the AWS cloud. It is also the most efficient way to connect to your cloud services from within your own data center: when you link your data center to the VPC that contains your instances, each instance is allocated a private IP address that can be accessed from your data center. As a result, you may use public cloud services as if they were on your own private network.

52. If you want to launch Amazon Elastic Compute Cloud (EC2) instances and assign each instance a predetermined private IP address, you should:

  1. Launch the instance from a private Amazon Machine Image (AMI).
  2. Assign a group of sequential Elastic IP addresses to the instances.
  3. Launch the instances in the Amazon Virtual Private Cloud (VPC).
  4. Launch the instances in a Placement Group.

Answer C.

Explanation: The best way of connecting to your cloud resources (for example, ec2 instances) from your own data center (for example, private cloud) is a VPC. Once you connect your data center to the VPC in which your instances are present, each instance is assigned a private IP address that can be accessed from your data center. Hence, you can access your public cloud resources as if they were on your own network.

Master the Cloud with Confidence – Enroll in AWS SysOps Training Today!

53. How do you connect multiple sites to a VPC?

If you have numerous VPN connections, you may use the AWS VPN CloudHub to encrypt communication across locations. Here’s an illustration of how to link different sites to a VPC:

(Figure: connecting multiple sites to a VPC with AWS VPN CloudHub)

54. What are some of the security products and features offered in VPC?

Here are some security products and features:

  • Security groups: act as firewalls for EC2 instances, allowing you to regulate inbound and outbound traffic at the instance level.
  • Network access control lists (ACLs): operate as subnet-level firewalls, managing inbound and outbound traffic.
  • Flow logs: capture inbound and outbound traffic from the network interfaces in your VPC.

55. How many Subnets are allowed in a VPC?

Each Amazon Virtual Private Cloud (VPC) may support up to 200 subnets.

56. Can I connect my corporate data center to the Amazon Cloud?

Yes, you can do this by establishing a VPN (Virtual Private Network) connection between your company's network and your VPC (Virtual Private Cloud). This will allow you to interact with your EC2 instances as if they were within your existing network.

57. Is it possible to change an EC2’s private IP address while it is running or stopped in a VPC?

The primary private IP address is attached to the instance throughout its lifetime and cannot be changed; however, secondary private addresses can be unassigned, assigned, or moved between interfaces or instances at any point.

58. Why do you make subnets?

  1. Because there is a shortage of networks
  2. To efficiently utilize networks that have a large no. of hosts.
  3. Because there is a shortage of hosts.
  4. To efficiently utilize networks that have a small no. of hosts.

Answer B.

Explanation: If a network has a large number of hosts, managing all these hosts can be tedious. Therefore, we divide this network into subnets (sub-networks) so that managing these hosts becomes simpler.

59. Which of the following is true?

  1. You can attach multiple route tables to a subnet
  2. You can attach multiple subnets to a route table
  3. Both A and B
  4. None of these.

Answer B.

Explanation: Route Tables are used to route network packets. Having multiple route tables in a subnet will lead to confusion as to where the packet has to go. Therefore, there is only one route table in a subnet. Since a route table can have any number of records or information, attaching multiple subnets to a route table is possible.

60. In CloudFront, what happens when content is NOT present at an Edge location and a request is made to it?

  1. An Error “404 not found” is returned.
  2. CloudFront delivers the content directly from the origin server and stores it in the cache of the edge location.
  3. The request is kept on hold till content is delivered to the edge location
  4. The request is routed to the next closest edge location

Answer B. 

Explanation: CloudFront is a content delivery network that caches data at the edge location nearest to the user to reduce latency. If the content is not present at an edge location, it is fetched from the origin server on the first request and stored in the edge cache, from which subsequent requests are served.

61. If I’m using Amazon CloudFront, can I use Direct Connect to transfer objects from my own data center?

Yes. Amazon CloudFront supports custom origins, including origins outside of AWS. With AWS Direct Connect, you will be charged at the respective data transfer rates.

62. If my AWS Direct Connect fails, will I lose my connectivity?

If a backup AWS Direct connect has been configured, in the event of a failure, it will switch over to the second one. Enabling bidirectional forwarding detection (BFD) when configuring your connections is recommended to ensure faster detection and failover. On the other hand, if you have configured a backup IPsec VPN connection instead, all VPC traffic will failover to the backup VPN connection automatically. Traffic to/from public resources such as Amazon S3 will be routed over the Internet. If you do not have a backup AWS Direct Connect link or an IPsec VPN link, then Amazon VPC traffic will be dropped in the event of a failure.

 

Section 5: AWS Interview Questions and Answers for Amazon Database

 

63. If I launch a standby RDS instance, will it be in the same Availability Zone as my primary?

  1. Only for Oracle RDS types
  2. Yes
  3. Only if it is configured at launch
  4. No

Answer D.

Explanation: No. The purpose of having a standby instance is to survive an infrastructure failure (if one happens), so the standby instance is kept in a different Availability Zone, which is a physically separate, independent infrastructure.

64. When would I prefer Provisioned IOPS over Standard RDS storage?

  1. If you have batch-oriented workloads
  2. If you use production online transaction processing (OLTP) workloads.
  3. If you have workloads that are not sensitive to consistent performance
  4. All of the above

Answer B.

Explanation: Provisioned IOPS delivers fast, consistent, and predictable I/O, which is exactly what production OLTP workloads need. Batch-oriented workloads and workloads that are not sensitive to consistent performance can usually make do with cheaper standard storage, so paying for Provisioned IOPS there would be wasteful.

65. Given that the RDS instance replica is not promoted as the master instance, how would you handle a situation in which the relational database engine routinely collapses as traffic to your RDS instances increases?

To handle high volumes of traffic, a bigger RDS instance type is necessary, along with manual or automated snapshots so that data can be restored if the RDS instance fails.

66. Which scaling method would you recommend for RDS, and why?

There are two forms of scaling: vertical scaling and horizontal scaling. Vertical scaling lets you scale up your master database with the click of a button; an RDS instance can be resized across a wide range of instance sizes. Horizontal scaling, on the other hand, means adding read replicas: read-only copies of the database that serve read traffic and spread the load.

66. How is Amazon RDS, DynamoDB and Redshift different?

  • Amazon RDS is a database management service for relational databases; it manages the patching, upgrading, and backing up of your databases without your intervention. RDS is a database management service for structured data only.
  • DynamoDB, on the other hand, is a NoSQL database service, NoSQL deals with unstructured data.
  • Redshift is an entirely different service. It is a data warehouse product used in data analysis.

67. If I am running my DB Instance as a Multi-AZ deployment, can I use the standby DB Instance for read or write operations along with the primary DB instance?

  1. Yes
  2. Only with MySQL-based RDS
  3. Only for Oracle RDS instances
  4. No

Answer D.

Explanation: No, the Standby DB instance cannot be used with the primary DB instance in parallel. The former is solely used for standby purposes and cannot be used unless the primary instance goes down.

68. Your company’s branch offices are all over the world. They use software with a multi-regional deployment on AWS and MySQL 5.6 for data persistence.

The task is to run an hourly batch process and read data from every region to compute cross-regional reports, which will be distributed to all the branches. This should be done in the shortest time possible. How will you build the DB architecture to meet the requirements?

  1. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region.
  2. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region
  3. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region.
  4. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region

Answer A.

Explanation: For this, we take an RDS instance as the master because it manages the database for us, and since we have to read from every region, we put a read replica of this instance in every region where the data has to be read from. Option C is not correct: a read replica is more efficient than shipping snapshots, since a read replica can be promoted to an independent DB instance if needed, whereas with a DB snapshot it becomes mandatory to launch a separate DB instance.

69. Can I run more than one DB instance for Amazon RDS for free?

Yes. You can run more than one Single-AZ Micro database instance, that too for free! However, any use exceeding 750 instance hours across all Amazon RDS Single-AZ Micro DB instances across all eligible database engines and regions will be billed at standard Amazon RDS prices. For example: if you run two Single-AZ Micro DB instances for 400 hours each in a single month, you will accumulate 800 instance hours of usage, of which 750 hours will be free. You will be billed for the remaining 50 hours at the standard Amazon RDS price.

For a detailed discussion on this topic, please refer to our RDS AWS blog.

70. Which AWS services will you use to collect and process e-commerce data for near real-time analysis?

  1. Amazon ElastiCache
  2. Amazon DynamoDB
  3. Amazon Redshift
  4. Amazon Elastic MapReduce

Answer B, C.

Explanation: DynamoDB is a fully managed NoSQL database service. Therefore, DynamoDB can be fed any type of unstructured data, including data from e-commerce websites. Later, an analysis can be done on them using Amazon Redshift. We are not using Elastic MapReduce since near real-time analyses are needed.

71. Can I retrieve only a specific element of the data if I have nested JSON data in DynamoDB?

Yes. When using the GetItem, BatchGetItem, Query, or Scan APIs, you can define a Projection Expression to determine which attributes should be retrieved from the table. Those attributes can include scalars, sets, or elements of a JSON document.
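For instance, here is a small boto3 sketch that fetches only one nested attribute from an item; the table name, key, and attribute path are hypothetical:

```python
import boto3

table = boto3.resource("dynamodb").Table("Users")    # hypothetical table name

# Fetch only the nested city attribute instead of the whole item.
resp = table.get_item(
    Key={"UserId": "u-123"},                         # hypothetical key
    ProjectionExpression="#addr.city",               # a document path into nested JSON
    ExpressionAttributeNames={"#addr": "address"},   # alias avoids reserved-word clashes
)
print(resp.get("Item"))  # e.g. {'address': {'city': 'Pune'}}
```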

72. A company is deploying a new two-tier web application in AWS. The company has limited staff and requires high availability, and the application requires complex queries and table joins. Which configuration provides the solution for the company’s requirements?

  1. MySQL installed on two Amazon EC2 instances in a single Availability Zone
  2. Amazon RDS for MySQL with Multi-AZ
  3. Amazon ElastiCache
  4. Amazon DynamoDB

Answer B.

Explanation: Amazon RDS for MySQL with Multi-AZ meets all three requirements: it is a managed service (good for a company with limited staff), the Multi-AZ deployment provides high availability, and a relational engine supports the complex queries and table joins the application requires. DynamoDB, being a NoSQL store, does not support relational joins.

73. What happens to my backups and DB Snapshots if I delete my DB Instance?

When you delete a DB instance, you have the option of creating a final DB snapshot. If you do that, you can restore your database from that snapshot. RDS retains this user-created DB snapshot along with all other manually created DB snapshots after the instance is deleted. Also, automated backups are deleted, and only manually created DB Snapshots are retained.

74. Which of the following use cases are suitable for Amazon DynamoDB? Choose two answers

  1. Managing web sessions.
  2. Storing JSON documents.
  3. Storing metadata for Amazon S3 objects.
  4. Running relational joins and complex updates.

Answer A, C.

Explanation: DynamoDB is a good fit for fast, key-based access patterns such as managing web sessions and storing metadata for Amazon S3 objects. Although DynamoDB can also store JSON documents (see the previous question), relational joins and complex multi-item updates are not supported; workloads that need those belong in a relational database.

75. How can I load my data to Amazon Redshift from different data sources, such as Amazon RDS, Amazon DynamoDB, and Amazon EC2?

You can load the data in the following two ways (see the sketch after the list):

  • You can use the COPY command to load data in parallel directly to Amazon Redshift from Amazon EMR, Amazon DynamoDB, or any SSH-enabled host.
  • AWS Data Pipeline provides a high-performance, reliable, fault-tolerant solution to load data from a variety of AWS data sources. You can use AWS Data Pipeline to specify the data source and desired data transformations and then execute a pre-written import script to load your data into Amazon Redshift.
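As an illustration of the COPY route, here is a hedged sketch that issues a COPY statement through the Redshift Data API via boto3; the cluster, database, user, role ARN, and S3 path are all placeholders:

```python
import boto3

rsd = boto3.client("redshift-data")

# Run a COPY statement that loads CSV files from S3 into a Redshift table in parallel.
rsd.execute_statement(
    ClusterIdentifier="analytics-cluster",   # hypothetical cluster
    Database="dev",                          # hypothetical database
    DbUser="awsuser",                        # hypothetical user
    Sql="""
        COPY sales
        FROM 's3://my-bucket/sales/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        FORMAT AS CSV;
    """,
)
```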

76. What is an Amazon RDS maintenance window? Will your database instance be available during maintenance?

The RDS maintenance window allows you to plan DB instance updates, database engine version upgrades, and software patching. Only upgrades for security and durability are scheduled automatically. The maintenance window is set to 30 minutes by default, and the DB instance will remain active throughout these events but with somewhat reduced performance.

77. Your application has to retrieve data from your user’s mobile every 5 minutes, and the data is stored in DynamoDB. Later, every day at a particular time, the data is extracted into S3 on a per-user basis, and then your application is later used to visualize the data for the user. You are asked to optimize the architecture of the backend system to lower cost. What would you recommend?

  1. Create a new Amazon DynamoDB (able to be used each day) and drop the one for the previous day after its data is on Amazon S3.
  2. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.
  3. Introduce Amazon Elasticache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.
  4. Write data directly into an Amazon Redshift cluster, replacing both Amazon DynamoDB and Amazon S3.

Answer C.

Explanation: Since the work requires data to be extracted and analyzed daily, one could simply raise the provisioned read throughput, but that is expensive. Using ElastiCache to cache the results in memory instead reduces the provisioned read throughput, and hence the cost, without affecting performance.

78. You are running a website on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests, you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? (Choose two answers)

  1. Deploy ElastiCache in-memory cache running in each availability zone
  2. Implement sharding to distribute load to multiple RDS MySQL instances
  3. Increase the RDS MySQL Instance size and Implement provisioned IOPS
  4. Add an RDS MySQL read replica in each availability zone

Answer A, C.

Explanation: Since the site performs many reads and writes, provisioned IO may become expensive, but we also need high performance, so the frequently read data can be cached using ElastiCache. As for RDS, since read contention is happening, the instance size should be increased and Provisioned IOPS introduced to raise performance.

 

79. A startup is running a pilot deployment of around 100 sensors to measure street noise and air quality in urban areas for 3 months. It was noted that every month, around 4GB of sensor data is generated. The company uses a load-balanced auto-scaled layer of EC2 instances and an RDS database with 500 GB standard storage. The pilot was a success, and now they want to deploy at least 100K sensors, which need to be supported by the backend. You need to store the data for at least 2 years to analyze it. Which setup of the following would you prefer?

  1. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
  2. Ingest data into a DynamoDB table and move old data to a Redshift cluster
  3. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage
  4. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS

Answer C.
Explanation: A Redshift cluster would be preferred because it is easy to scale, and the work is done in parallel across the nodes, making it perfect for a bigger workload like this use case. Since 100 sensors generate 4 GB of data per month, two years of data comes to about 96 GB; scaling from 100 sensors to 100K (a factor of 1,000) turns that into roughly 96 TB. Hence, option C, with 96 TB of storage, is the right answer.

 

Section 6: AWS Interview Questions and Answers for AWS Auto Scaling, AWS Load Balancer

 

80. Suppose you have an application that renders images and also does some general computing. Which of the following services will best suit your needs?

  1. Classic Load Balancer
  2. Application Load Balancer
  3. Both of them
  4. None of these

Answer B.

Explanation: You would choose an Application Load Balancer, since it supports path-based routing, which means it can make routing decisions based on the URL. Therefore, image-rendering requests can be routed to one set of instances and general-computing requests to another.
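A hedged boto3 sketch of such a path-based rule on an existing ALB listener; the ARNs and path pattern are hypothetical placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Route image-rendering traffic to a dedicated target group; everything else
# falls through to the listener's default action (e.g., general compute).
elbv2.create_rule(
    ListenerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "listener/app/my-alb/50dc6c495c0c9188/f2f7dc8efc522ab2"   # hypothetical
    ),
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/render/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": (
            "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
            "targetgroup/render-workers/943f017f100becff"         # hypothetical
        ),
    }],
)
```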

81. What is the difference between Scalability and Elasticity?

Scalability is the ability of a system to increase its hardware resources to handle an increase in demand. This can be done by increasing the hardware specifications or the processing nodes.

Elasticity is the ability of a system to handle an increase in workload by adding hardware resources when demand increases (the same as scaling), but also to roll back the scaled resources when they are no longer needed. This is particularly helpful in cloud environments, where a pay-per-use model is followed.

82. How can an existing instance be added to a new Auto Scaling group?

To add an existing instance to a new Auto Scaling group, follow these steps (a boto3 one-liner follows the list):

Step 1. Launch the EC2 console.

Step 2. Select your instance from the list of instances.

Step 3. Navigate to Actions -> Instance Settings -> Join the Auto Scaling Group.

Step 4. Choose a new Auto Scaling group.

Step 5. Attach this group to the instance.

Step 6. If necessary, modify the instance.

Step 7. Once completed, the instance is added to the new Auto Scaling group.
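Programmatically, the same attachment is a single boto3 call, assuming hypothetical names:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Attach a running instance to an existing Auto Scaling group.
autoscaling.attach_instances(
    AutoScalingGroupName="my-asg",          # hypothetical group name
    InstanceIds=["i-0123456789abcdef0"],    # hypothetical instance ID
)
```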

83. How will you change the instance type for instances that are running in your application tier and are using Auto Scaling? Where will you change it from the following areas?

  1. Auto Scaling policy configuration
  2. Auto Scaling group
  3. Auto Scaling tags configuration
  4. Auto Scaling launch configuration

Answer D.

Explanation: The auto scaling tags configuration is used to attach metadata to your instances. To change the instance type, you have to use the auto scaling launch configuration.

84. You have a content management system running on an Amazon EC2 instance that is approaching 100% CPU utilization. Which option will reduce load on the Amazon EC2 instance?

  1. Create a load balancer and register the Amazon EC2 instance with it
  2. Create a CloudFront distribution and configure the Amazon EC2 instance as the origin
  3. Create an Auto Scaling group from the instance using the CreateAutoScalingGroup action
  4. Create a launch configuration from the instance using the CreateLaunchConfiguration action

Answer B.

Explanation: A CloudFront distribution caches the content at edge locations, so a large share of requests is served from the cache instead of hitting the instance, which directly reduces its load. A load balancer with a single registered instance still sends all the traffic to that one instance, and an Auto Scaling group or launch configuration on its own does not redistribute the existing load.

85. When should I use a Classic Load Balancer and when should I use an Application load balancer?

A Classic Load Balancer is ideal for simple traffic load balancing across multiple EC2 instances, while an Application Load Balancer is ideal for microservices or container-based architectures where traffic needs to be routed to multiple services or load balanced across multiple ports on the same EC2 instance.

For a detailed discussion on Auto Scaling and Load Balancer, please refer to our EC2 AWS blog.

86. What does Connection draining do?

  1.  Terminates instances that are not in use.
  2.  Re-routes traffic from instances that are to be updated or failed a health check.
  3.  Re-routes traffic from instances that have more workload to instances that have less workload.
  4.  Drains all the connections from an instance with one click.

Answer B.

Explanation: Connection draining is an ELB feature. When an instance is to be removed, because it failed a health check or needs a software update, connection draining stops sending new requests to that instance and lets in-flight requests complete, while new traffic is routed to the other instances.

87. When an unhealthy instance is terminated and replaced with a new one, which of the following services does that?

  1.  Sticky Sessions
  2.  Fault Tolerance
  3.  Connection Draining
  4.  Monitoring

Answer B.

Explanation: When ELB detects that an instance is unhealthy, it starts routing incoming traffic to the other healthy instances in the region. If all the instances in a region become unhealthy, and you have instances in another Availability Zone/region, traffic is directed to them. Once the original instances become healthy again, traffic is routed back to them.

88. What are lifecycle hooks used for in AutoScaling?

  1.  They are used to do health checks on instances.
  2.  They are used to put an additional wait time before a scale-in or scale-out event.
  3.  They are used to shorten the wait time for a scale-in or scale-out event.
  4.  None of these.

Answer B.

Explanation: Lifecycle hooks pause an instance in a wait state before a lifecycle action, i.e., launching or terminating an instance, completes. The purpose of this wait time can be anything from extracting log files before terminating an instance to installing the necessary software on an instance before it is put into service.
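
As an illustration, the following boto3 sketch (hook and group names are placeholders) pauses terminating instances for up to five minutes, matching the log-extraction scenario above:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hold each terminating instance in a wait state for up to 300 seconds,
# e.g. so log files can be copied off it before shutdown. If nothing
# completes the hook in time, termination continues (DefaultResult).
autoscaling.put_lifecycle_hook(
    LifecycleHookName="drain-logs",   # illustrative hook name
    AutoScalingGroupName="my-asg",    # illustrative group name
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=300,
    DefaultResult="CONTINUE",
)
```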

89. A user has set up an Auto Scaling group. Due to some issue, the group has failed to launch a single instance for more than 24 hours. What will happen to Auto Scaling in this condition?

  1. Auto Scaling will keep trying to launch the instance for 72 hours
  2. Auto Scaling will suspend the scaling process
  3. Auto Scaling will start an instance in a separate region
  4. The Auto Scaling group will be terminated automatically

Answer B.

Explanation: If Auto Scaling has been unable to launch an instance for an extended period, it suspends the scaling process. More generally, Auto Scaling allows you to suspend and then resume one or more of the scaling processes in your Auto Scaling group. This is very useful when you want to investigate a configuration problem or other issue with your web application and make changes without triggering the scaling process.
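
Suspending and resuming can also be done programmatically. A minimal boto3 sketch with an illustrative group name:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Pause launching and terminating while a configuration problem is
# investigated...
autoscaling.suspend_processes(
    AutoScalingGroupName="my-asg",  # illustrative group name
    ScalingProcesses=["Launch", "Terminate"],
)

# ...then resume normal scaling once the issue is fixed.
autoscaling.resume_processes(
    AutoScalingGroupName="my-asg",
    ScalingProcesses=["Launch", "Terminate"],
)
```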

 

 

Section 7: AWS Interview Questions for CloudTrail, Route 53

 

90. What is CloudTrail, and how does it interact with Route 53?

CloudTrail is a service that logs every request made to the Amazon Route 53 API by an AWS account, including requests made by IAM users, and delivers the log files to an Amazon S3 bucket. These log files contain information that can be used to determine which requests were made to Amazon Route 53, the IP address the request came from, who made the request, when it was made, and more.

91. How does AWS Config interact with AWS CloudTrail?

AWS CloudTrail logs user API activity on your account and gives you access to that data. CloudTrail provides detailed information on API actions, such as the caller's identity, the time of the call, the request parameters, and the response elements. AWS Config, on the other hand, saves point-in-time configuration snapshots of your AWS resources as Configuration Items (CIs).

A CI can be used to determine what an AWS resource looked like at any given point in time, whereas CloudTrail tells you instantly who made the API request that changed the resource. CloudTrail can also be used to determine whether, for example, a security group was incorrectly configured.

92. You have an EC2 Security Group with several running EC2 instances. You changed the Security Group rules to allow inbound traffic on a new port and protocol, and then launched several new instances in the same Security Group. The new rules apply:

  1. Immediately to all instances in the security group.
  2. Immediately to the new instances only.
  3. Immediately to the new instances, but old instances must be stopped and restarted before the new rules apply.
  4. To all instances, but it may take several minutes for old instances to see the changes.

Answer A.

Explanation: Any rule specified in an EC2 Security Group applies immediately to every instance associated with that security group, regardless of whether the instance was launched before or after the rule was added.
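
To see this in action, here is a boto3 sketch (group ID, port, and CIDR are illustrative) that adds an inbound rule; the rule is effective for every associated instance the moment the call succeeds:

```python
import boto3

ec2 = boto3.client("ec2")

# Open a new inbound port on an existing security group. No restart is
# required: old and new instances in the group pick up the rule at once.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # illustrative security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8443,
        "ToPort": 8443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```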

93. To create a mirror image of your environment in another region for disaster recovery, which of the following AWS resources does not need to be recreated in the second region?

  1. Route 53 Record Sets
  2. Elastic IP Addresses (EIP)
  3. EC2 Key Pairs
  4. Launch configurations
  5. Security Groups

Answer A.

Explanation: Route 53 record sets are global assets: Route 53 is not a regional service, so its hosted zones and record sets are valid across regions and do not need to be replicated. Elastic IPs, EC2 key pairs, launch configurations, and security groups are all tied to a region and must be recreated in the second one.

94. A customer wants to capture all client connection information from his load balancer at an interval of 5 minutes, which of the following options should he choose for his application?

  1. Enable AWS CloudTrail for the load balancer.
  2. Enable access logs on the load balancer.
  3. Install the Amazon CloudWatch Logs agent on the load balancer.
  4. Enable Amazon CloudWatch metrics on the load balancer.

Answer B.

Explanation: ELB access logs capture detailed information about each client connection, including the client's IP address, latencies, request paths, and backend responses, and they can be published to Amazon S3 at 5-minute or 60-minute intervals, which matches the requirement exactly. CloudTrail, by contrast, only records API calls made against the load balancer (such as configuration changes), not client connections.
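
Enabling access logs on a Classic Load Balancer is a single attribute change. A boto3 sketch with illustrative names; the S3 bucket must already have a policy allowing ELB to write to it:

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Publish access logs, one entry per client request, to S3 every
# 5 minutes (EmitInterval accepts 5 or 60).
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-elb",  # illustrative name
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-logs",  # illustrative bucket
            "EmitInterval": 5,
            "S3BucketPrefix": "prod",
        },
    },
)
```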

95. A customer wants to track access to their Amazon Simple Storage Service (S3) buckets and use this information for internal security and access audits. Which of the following will meet the Customer’s requirements?

  1. Enable AWS CloudTrail to audit all Amazon S3 bucket access.
  2. Enable server access logging for all required Amazon S3 buckets.
  3. Enable the Requester Pays option to track access via AWS Billing
  4. Enable Amazon S3 event notifications for Put and Post.

Answer B.

Explanation: S3 server access logging records details of every request made against a bucket: the requester, the bucket name, the request time, the action performed, and the response status, which is exactly the information needed for internal security and access audits. CloudTrail captures bucket-level API calls by default and tracks object-level access only when data events are explicitly enabled.
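
Server access logging is enabled per bucket. A minimal boto3 sketch (bucket names are illustrative; the target bucket must grant the S3 log-delivery service permission to write):

```python
import boto3

s3 = boto3.client("s3")

# Record every request made against the audited bucket as log objects
# in a separate target bucket.
s3.put_bucket_logging(
    Bucket="audited-bucket",  # illustrative bucket name
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "access-log-bucket",  # illustrative name
            "TargetPrefix": "s3-access/",
        },
    },
)
```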

96. Which of the following are true regarding AWS CloudTrail? (Choose 2 answers)

  1. CloudTrail is enabled globally
  2. CloudTrail is enabled on a per-region and service basis
  3. Logs can be delivered to a single Amazon S3 bucket for aggregation.
  4. CloudTrail is enabled for all available services within a region.

Answer B, C.

Explanation: CloudTrail is enabled on a per-region basis and is not automatically enabled for every service, so option B is correct. Log files from multiple regions can be delivered to a single Amazon S3 bucket for aggregation, so option C is also correct.

97. What happens if CloudTrail is turned on for my account but my Amazon S3 bucket is not configured with the correct policy?

CloudTrail log files are delivered according to the S3 bucket's policy. If the bucket policy is missing or misconfigured, CloudTrail will not be able to deliver the log files.
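
Both points can be seen in a short boto3 sketch (trail and bucket names are illustrative): a multi-region trail aggregates logs into a single S3 bucket, and creation fails unless that bucket already carries a policy allowing CloudTrail to write to it:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# A multi-region trail delivering all logs to one S3 bucket. This call
# fails if the bucket's policy does not permit CloudTrail to deliver
# log files to it.
cloudtrail.create_trail(
    Name="account-audit-trail",         # illustrative trail name
    S3BucketName="my-cloudtrail-logs",  # illustrative bucket name
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="account-audit-trail")
```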

98. How do I transfer my existing domain name registration to Amazon Route 53 without disrupting my existing web traffic?

First, get a list of the DNS record data for your domain name; it is generally available in the form of a "zone file" from your existing DNS provider. Once you have the DNS record data, use Route 53's Management Console or simple web-services interface to create a hosted zone that will store the DNS records for your domain, and import the record data into it. Next, contact the registrar with whom you registered the domain and follow its transfer process, which includes updating your domain's name servers to the ones associated with your hosted zone. As soon as the registrar propagates the new name server delegations, your DNS queries will start to be answered by Route 53, without disrupting existing traffic.
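
Creating the hosted zone itself is one API call. A minimal boto3 sketch (the domain is illustrative) that also prints the name servers you would hand to your registrar:

```python
import time

import boto3

route53 = boto3.client("route53")

# Create the hosted zone that will hold the records imported from your
# current provider's zone file.
zone = route53.create_hosted_zone(
    Name="example.com",                # illustrative domain
    CallerReference=str(time.time()),  # any unique string works
)

# These are the name servers to configure at your registrar.
print(zone["DelegationSet"]["NameServers"])
```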

99. What services are available for implementing a centralized logging solution?

The most important services you can utilize are Amazon CloudWatch Logs for log collection, Amazon S3 for storage, and Amazon OpenSearch Service (formerly Amazon Elasticsearch Service) for search and visualization. To move data from Amazon S3 into Amazon OpenSearch Service, you can use Amazon Kinesis Data Firehose.


 

Section 8: AWS Interview Questions for AWS SQS, AWS SNS, AWS SES, AWS Elastic Beanstalk

 

100. What exactly are SNS and SQS?

Amazon Simple Notification Service (SNS) is a web service for managing and delivering notifications from the cloud: you publish a message once, and SNS pushes it out to all of the topic's subscribers and endpoints.

Amazon Simple Queue Service (SQS) is a managed message queuing service: producers place messages on a queue and consumers retrieve them at their own pace, so components can exchange data without both having to be available at the same time.
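
To make the contrast concrete, here is a minimal boto3 sketch; the topic and queue names are illustrative, and in a real setup the queue would also need a subscription to the topic and an access policy before it could receive the topic's messages:

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# SNS: push-style fan-out. Publish once, deliver to all subscribers.
topic_arn = sns.create_topic(Name="order-events")["TopicArn"]
sns.publish(TopicArn=topic_arn, Message="Order 42 placed")

# SQS: pull-style queue. Producers enqueue, consumers poll at their
# own pace, so the two sides never need to be available simultaneously.
queue_url = sqs.create_queue(QueueName="order-worker")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody="process order 42")
response = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=5)
```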

101. Which of the following services would you not use to deploy an app?

  1. Elastic Beanstalk
  2. Lambda
  3. Opsworks
  4. CloudFormation

Answer B.

Explanation: Lambda runs serverless functions in response to events. When we say serverless, we mean you never have to worry about the compute resources running in the background. But Lambda executes code rather than deploying applications; Elastic Beanstalk, OpsWorks, and CloudFormation, by contrast, are all services you can use to deploy an app.

102. What distinguishes AWS CloudFormation from AWS Elastic Beanstalk?

The following are some distinctions between AWS CloudFormation and AWS Elastic Beanstalk:

AWS CloudFormation helps you provision and describe all of the infrastructure resources in your cloud environment. AWS Elastic Beanstalk, on the other hand, provides an environment that makes it simple to deploy and run cloud applications.

AWS CloudFormation meets the infrastructure needs of a wide range of applications, including legacy and enterprise applications. AWS Elastic Beanstalk, by contrast, is integrated with developer tools to help you manage your applications' lifecycles.

103. How does Elastic Beanstalk apply updates?

  1. By having a duplicate ready with updates before swapping.
  2. By updating on the instance while it is running
  3. By taking the instance down in the maintenance window
  4. Updates should be installed manually

Answer A.

Explanation: Elastic Beanstalk prepares a duplicate copy of the instance, applies the updates there, and then routes your traffic to the duplicate. If the updated application fails, traffic switches back to the original instance, so users of your application experience no downtime.

104. What happens if my application in Beanstalk stops responding to requests?

AWS Elastic Beanstalk apps have a built-in mechanism for withstanding infrastructure failures. If an Amazon EC2 instance dies for whatever reason, Beanstalk will instantly launch a new instance using Auto Scaling. Beanstalk can also detect when your application stops responding on its custom health-check URL (see question 106 for details).

105. How is AWS Elastic Beanstalk different from AWS OpsWorks?

AWS Elastic Beanstalk is an application management platform, while OpsWorks is a configuration management platform. Beanstalk is an easy-to-use service for deploying and scaling web applications developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker. Customers upload their code, and Elastic Beanstalk automatically handles the deployment; the application is ready to use without any infrastructure or resource configuration.

In contrast, AWS Opsworks is an integrated configuration management platform for IT administrators or DevOps engineers who want a high degree of customization and control over operations.

106. What happens if my application stops responding to requests in Beanstalk?

AWS Beanstalk applications have a system in place to avoid failures in the underlying infrastructure. If an Amazon EC2 instance fails for any reason, Beanstalk will use Auto Scaling to launch a new instance automatically. Beanstalk can also detect if your application is not responding on the custom link, even though the infrastructure appears healthy. If this happens, it is logged as an environment event (e.g., a bad version was deployed) so you can take appropriate action.

For a detailed discussion on this topic, please refer to the Lambda AWS blog.

 

Section 9: AWS Interview Questions and Answers for AWS OpsWorks, AWS KMS

 

107. How is AWS OpsWorks different than AWS CloudFormation?

OpsWorks and CloudFormation both support application modeling, deployment, configuration, management, and related activities. They also support a wide variety of architectural patterns, from simple web applications to highly complex applications. However, AWS OpsWorks and AWS CloudFormation differ in abstraction level and areas of focus.

AWS CloudFormation is a building-block service that enables customers to manage almost any AWS resource via JSON- or YAML-based templates. It provides foundational capabilities for the full breadth of AWS without prescribing a particular development and operations model. Customers define templates and use them to provision and manage AWS resources, operating systems, and application code.

In contrast, AWS OpsWorks is a higher-level service that focuses on providing highly productive and reliable DevOps experiences for IT administrators and ops-minded developers. To do this, AWS OpsWorks employs a configuration management model based on concepts such as stacks and layers and provides integrated experiences for key activities like deployment, monitoring, auto-scaling, and automation. Compared to AWS CloudFormation, AWS OpsWorks supports a narrower range of application-oriented AWS resource types, including Amazon EC2 instances, Amazon EBS volumes, Elastic IPs, and Amazon CloudWatch metrics.

108. A firm seeking to migrate to the AWS Cloud wants to use its existing Chef recipes for infrastructure configuration management. Which AWS service would be best for this need?

  1. AWS Elastic Load Balancer
  2. AWS Elastic Beanstalk
  3. AWS OpsWorks
  4. AWS Inspector

Answer C.

Explanation: AWS OpsWorks is a configuration management service that lets you use Chef or Puppet to configure and operate applications in the cloud. AWS OpsWorks Stacks and AWS OpsWorks for Chef Automate let you reuse existing Chef cookbooks and recipes for configuration management, which is exactly what the firm needs. AWS OpsWorks for Puppet Enterprise, by contrast, provides a managed Puppet Enterprise master server; Puppet offers a suite of tools for enforcing desired infrastructure states and automating on-demand operations.

Learn more about AWS Migration here!

109. I created a key in the Oregon region to encrypt my data in the North Virginia region for security purposes. I added two users to the key, as well as an external AWS account. When I tried to encrypt an object in S3, the key I had just created was not listed. What could be the reason?

  1. External AWS accounts are not supported.
  2. AWS S3 cannot be integrated with KMS.
  3. The key should be in the same region.
  4. New keys take some time to appear in the list.

Answer C.

Explanation: The KMS key and the data to be encrypted must be in the same region, so the approach used here to secure the data is incorrect.
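
The fix is simply to create the key in the same region as the bucket. A boto3 sketch; the region, bucket, and object names are illustrative:

```python
import boto3

region = "us-east-1"  # N. Virginia, the same region as the S3 data
kms = boto3.client("kms", region_name=region)
s3 = boto3.client("s3", region_name=region)

# Create the KMS key in the region where the data lives...
key_id = kms.create_key(Description="S3 encryption key")["KeyMetadata"]["KeyId"]

# ...so it is available for server-side encryption of objects there.
s3.put_object(
    Bucket="my-nvirginia-bucket",  # illustrative bucket name
    Key="secret.txt",
    Body=b"hello",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=key_id,
)
```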

110. A company wants a stack-based model for its resources in AWS, with separate stacks for its Development and Production environments. Which of the following methods can be used to meet this requirement?

  1. Use EC2 tags to define different stack layers for your resources.
  2. Define the metadata for the different stacks in DynamoDB.
  3. Define the different stacks for your application using AWS OpsWorks.
  4. Define the different stacks for your application using AWS Config.

Answer C.

Explanation: The OpsWorks service can meet this need. AWS OpsWorks Stacks lets you manage applications and servers on AWS and on-premises. You can model your application as a stack containing different layers, such as load balancing, database, and application server, and in each layer you can provision Amazon EC2 instances or connect other resources such as Amazon RDS databases.

111.  A company needs to monitor the read-and-write IOPS for its AWS MySQL RDS instance and send real-time alerts to its operations team. Which AWS services can accomplish this?

  1. Amazon Simple Email Service
  2. Amazon CloudWatch
  3. Amazon Simple Queue Service
  4. Amazon Route 53

Answer B.

Explanation: Amazon CloudWatch is a monitoring service that collects the ReadIOPS and WriteIOPS metrics for RDS instances, and a CloudWatch alarm can send real-time alerts to the operations team (for example, through an SNS topic). The other options serve different purposes; Route 53, for instance, provides DNS services, so CloudWatch is the apt choice.
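
A minimal boto3 sketch of such an alarm; the identifiers, threshold, and SNS topic ARN are illustrative:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the RDS instance's average ReadIOPS exceeds the threshold
# for one minute; the alarm notifies an SNS topic that the operations
# team subscribes to.
cloudwatch.put_metric_alarm(
    AlarmName="rds-read-iops-high",
    Namespace="AWS/RDS",
    MetricName="ReadIOPS",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-mysql-db"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```

A matching WriteIOPS alarm follows the same pattern with MetricName="WriteIOPS".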

112. What happens when one of the resources in a stack cannot be created successfully in AWS CloudFormation?

When an event like this occurs, the "automatic rollback on error" feature kicks in: all the AWS resources that were created successfully up to the point where the error occurred are deleted. This is helpful since it does not leave behind any partial state, and it ensures that stacks are either created fully or not created at all. It is useful in situations where you accidentally exceed your limit on the number of Elastic IP addresses, or do not have access to an EC2 AMI that you are trying to run, and so on.

113. What automation tools can you use to spin up servers?

Any of the following tools can be used:

  • Roll your own scripts and use the AWS API tools. Such scripts could be written in bash, Perl, or another language of your choice.
  • Use a configuration management and provisioning tool like Puppet or Chef. You can also use a tool like Scalr.
  • Use a managed solution such as RightScale.

We at Edureka are here to help you with every step on your journey to becoming an AWS Solution Architect; therefore, in addition to the AWS Interview Questions and answers, we have created a curriculum that covers exactly what you need to crack the Solution Architect Exam! 

I hope you benefitted from this blog. The topics you learned here are the most sought-after skill sets that recruiters look for in an AWS Solution Architect professional. I have tried to cover AWS interview questions and answers for freshers as well as for people with 3-5 years of experience. However, for a more detailed study of AWS, you can refer to our AWS Tutorial. Unlock your potential as an AWS developer by earning your AWS Developer Certification, and take the next step in your cloud computing journey by showcasing your expertise in designing, deploying, and operating applications on AWS.

Got a question for us? Please mention it in the comments section, and we will reply.
