Professional Cloud Database Engineer Exam: Free Online Practice Questions
You have a Cloud SQL instance (DB-1) with two cross-region read replicas (DB-2 and DB-3). During a business continuity test, the primary instance (DB-1) was taken offline and a replica (DB-2) was promoted. The test has concluded and you want to return to the pre-test configuration.
What should you do?
- A . Bring DB-1 back online.
- B . Delete DB-1, and re-create DB-1 as a read replica in the same region as DB-1.
- C . Delete DB-2 so that DB-1 automatically reverts to the primary instance.
- D . Create DB-4 as a read replica in the same region as DB-1, and promote DB-4 to primary.
D
Explanation:
To return the primary to the region that had the outage, you perform a failback, which carries out the same steps as the failover but in the opposite direction. Because DB-2 was promoted, DB-1 is no longer part of the replication topology, so you create a new read replica (DB-4) in DB-1's original region and then promote it to route traffic back to that region.
https://cloud.google.com/sql/docs/mysql/high-availability#failback
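The following is a minimal sketch of answer D against the Cloud SQL Admin API using the Python discovery client; the project, instance names, region, and machine tier are placeholders, and in practice you would wait for each long-running operation to complete (and for the replica to catch up) before the next step.

```python
# Sketch: recreate the original topology after the business continuity test.
# DB-2 is currently the primary; we add DB-4 as a read replica in DB-1's
# original region and then promote it. All names and regions are placeholders.
from googleapiclient import discovery

PROJECT = "my-project"  # placeholder
sqladmin = discovery.build("sqladmin", "v1beta4")

# Step 1: create DB-4 as a read replica of the current primary (DB-2),
# placed in the region where DB-1 originally ran.
replica_body = {
    "name": "db-4",
    "region": "us-central1",        # DB-1's original region (placeholder)
    "masterInstanceName": "db-2",   # the instance promoted during the test
    "settings": {"tier": "db-custom-2-7680"},
}
op = sqladmin.instances().insert(project=PROJECT, body=replica_body).execute()
print("Create-replica operation:", op["name"])

# Step 2: once DB-4 has caught up, promote it to a standalone primary,
# returning the primary to DB-1's original region.
op = sqladmin.instances().promoteReplica(project=PROJECT, instance="db-4").execute()
print("Promote operation:", op["name"])
```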
You work in the logistics department. Your data analysis team needs daily extracts from Cloud SQL for MySQL to train a machine learning model. The model will be used to optimize next-day routes. You need to export the data in CSV format. You want to follow Google-recommended practices.
What should you do?
- A . Use Cloud Scheduler to trigger a Cloud Function that will run a select * from table(s) query to call the cloudsql.instances.export API.
- B . Use Cloud Scheduler to trigger a Cloud Function through Pub/Sub to call the cloudsql.instances.export API.
- C . Use Cloud Composer to orchestrate an export by calling the cloudsql.instances.export API.
- D . Use Cloud Composer to execute a select * from table(s) query and export results.
B
Explanation:
https://cloud.google.com/blog/topics/developers-practitioners/scheduling-cloud-sql-exports-using-cloud-functions-and-cloud-scheduler
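As a hedged illustration of answer B, the Pub/Sub-triggered Cloud Function might look like the sketch below; the function name, message fields, bucket path, and query are assumptions for this example, not part of the question.

```python
# Sketch of a Pub/Sub-triggered Cloud Function that calls the Cloud SQL Admin
# API to export query results to CSV in Cloud Storage.
import base64
import json

from googleapiclient import discovery


def export_to_csv(event, context):
    """Triggered daily by Cloud Scheduler via a Pub/Sub topic."""
    msg = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    # Expected message shape (an assumption for this sketch):
    # {"project": "...", "instance": "...", "database": "...",
    #  "gcs_uri": "gs://my-bucket/exports/routes.csv",
    #  "query": "SELECT * FROM deliveries"}

    sqladmin = discovery.build("sqladmin", "v1beta4")
    body = {
        "exportContext": {
            "fileType": "CSV",
            "uri": msg["gcs_uri"],
            "databases": [msg["database"]],
            "csvExportOptions": {"selectQuery": msg["query"]},
        }
    }
    response = (
        sqladmin.instances()
        .export(project=msg["project"], instance=msg["instance"], body=body)
        .execute()
    )
    print("Export operation started:", response["name"])
```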
You are managing a Cloud SQL for PostgreSQL instance in Google Cloud. You have a primary instance in region 1 and a read replica in region 2. After a failure of region 1, you need to make the Cloud SQL instance available again. You want to minimize data loss and follow Google-recommended practices.
What should you do?
- A . Restore the Cloud SQL instance from the automatic backups in region 3.
- B . Restore the Cloud SQL instance from the automatic backups in another zone in region 1.
- C . Check "Lag Bytes" in the monitoring dashboard for the primary instance in the read replica instance. Check the replication status using pg_catalog.pg_last_wal_receive_lsn(). Then, fail over to region 2 by promoting the read replica instance.
- D . Check your instance operational log for the automatic failover status. Look for time, type, and status of the operations. If the failover operation is successful, no action is necessary. Otherwise, manually perform gcloud sql instances failover.
C
Explanation:
https://cloud.google.com/sql/docs/postgres/replication/cross-region-replicas#disaster_recovery
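A small sketch of the checks and the promotion step from answer C; the connection details, project, and instance names are placeholders.

```python
# Sketch: inspect replication progress on the replica, then promote it.
# Connection details and instance names are placeholders.
import psycopg2
from googleapiclient import discovery

# 1) On the read replica in region 2, check how much WAL has been received.
conn = psycopg2.connect(
    host="10.0.0.5", dbname="postgres", user="postgres", password="..."
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT pg_catalog.pg_last_wal_receive_lsn();")
    print("Last WAL LSN received by the replica:", cur.fetchone()[0])
conn.close()

# 2) If the remaining lag is acceptable, promote the replica so it becomes
#    a standalone primary serving the application from region 2.
sqladmin = discovery.build("sqladmin", "v1beta4")
op = sqladmin.instances().promoteReplica(
    project="my-project", instance="replica-region2"
).execute()
print("Promotion operation:", op["name"])
```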
Your company’s mission-critical, globally available application is supported by a Cloud Spanner database. Experienced users of the application have read and write access to the database, but new users are assigned read-only access to the database. You need to assign the appropriate Cloud Spanner Identity and Access Management (IAM) role to new users being onboarded soon.
What roles should you set up?
- A . roles/spanner.databaseReader
- B . roles/spanner.databaseUser
- C . roles/spanner.viewer
- D . roles/spanner.backupWriter
A
Explanation:
roles/spanner.databaseReader grants read-only access: members can read data, execute read-only SQL queries, and view the database schema, but cannot write.
https://cloud.google.com/spanner/docs/iam
You work for a financial services company that wants to use fully managed database services. Traffic volume for your consumer services products has increased annually at a constant rate with occasional spikes around holidays. You frequently need to upgrade the capacity of your database. You want to use Cloud Spanner and include an automated method to increase your hardware capacity to support a higher level of concurrency.
What should you do?
- A . Use linear scaling to implement the Autoscaler-based architecture
- B . Use direct scaling to implement the Autoscaler-based architecture.
- C . Upgrade the Cloud Spanner instance on a periodic basis during the scheduled maintenance window.
- D . Set up alerts that are triggered when Cloud Spanner utilization metrics breach the threshold, and then schedule an upgrade during the scheduled maintenance window.
A
Explanation:
Linear scaling is best used with load patterns that change more gradually or have a few large peaks. The method calculates the minimum number of nodes or processing units required to keep utilization below the scaling threshold. The number of nodes or processing units added or removed in each scaling event is not limited to a fixed step amount.
https://cloud.google.com/spanner/docs/autoscaling-overview#linear
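As a toy illustration (not the Autoscaler's actual code), the sizing calculation behind the linear method can be sketched as follows; the utilization figures are made up, and the real open-source Autoscaler also applies margins, cooldown periods, and min/max limits.

```python
# Toy illustration of linear scaling: size the instance so that the current
# load would sit at or below the scaling threshold. Values are made up.
import math

current_nodes = 10
current_cpu_utilization = 82.0   # percent, as reported by Cloud Monitoring
scaling_threshold = 65.0         # percent, the Autoscaler's threshold

# Minimum number of nodes needed to bring utilization under the threshold;
# the step size is not fixed, it is whatever the calculation requires.
suggested_nodes = math.ceil(
    current_nodes * current_cpu_utilization / scaling_threshold
)
print(f"Scale from {current_nodes} to {suggested_nodes} nodes")  # -> 13
```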
Your company is developing a global ecommerce website on Google Cloud. Your development team is working on a shopping cart service that is durable and elastically scalable with live traffic. Business disruptions from unplanned downtime are expected to be less than 5 minutes per month. In addition, the application needs to have very low latency writes. You need a data storage solution that has high write throughput and provides 99.99% uptime.
What should you do?
- A . Use Cloud SQL for data storage.
- B . Use Cloud Spanner for data storage.
- C . Use Memorystore for data storage.
- D . Use Bigtable for data storage.
B
Explanation:
Cloud Spanner is a highly scalable, reliable, and fully managed relational database service that runs on Google's infrastructure. It is designed to handle large amounts of data and to remain available even when individual components fail, which makes it a good fit for a global ecommerce site. Spanner is the right choice for this scenario because it delivers high write throughput with low-latency writes and provides at least 99.99% uptime.
Your organization works with sensitive data that requires you to manage your own encryption keys. You are working on a project that stores that data in a Cloud SQL database. You need to ensure that stored data is encrypted with your keys.
What should you do?
- A . Export data periodically to a Cloud Storage bucket protected by Customer-Supplied Encryption Keys.
- B . Use Cloud SQL Auth proxy.
- C . Connect to Cloud SQL using a connection that has SSL encryption.
- D . Use customer-managed encryption keys with Cloud SQL.
D
Explanation:
Customer-managed encryption keys (CMEK) let you protect data at rest in Cloud SQL with keys that you create and control in Cloud Key Management Service, which meets the requirement to encrypt stored data with your own keys.
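A hedged sketch of what answer D looks like through the Cloud SQL Admin API: the customer-managed key is attached via the instance's diskEncryptionConfiguration field. The project, region, database version, tier, and Cloud KMS key path below are placeholders.

```python
# Sketch: create a Cloud SQL instance whose storage is encrypted with a
# customer-managed key (CMEK). All names and the KMS key path are placeholders.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1beta4")

instance_body = {
    "name": "sensitive-data-db",
    "region": "us-central1",
    "databaseVersion": "POSTGRES_14",
    "settings": {"tier": "db-custom-2-7680"},
    # The key must live in Cloud KMS in the same region as the instance.
    "diskEncryptionConfiguration": {
        "kmsKeyName": (
            "projects/my-project/locations/us-central1/"
            "keyRings/my-keyring/cryptoKeys/cloudsql-key"
        )
    },
}
op = sqladmin.instances().insert(project="my-project", body=instance_body).execute()
print("Create operation:", op["name"])
```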
You are a DBA on a Cloud Spanner instance with multiple databases.
You need to assign these privileges to all members of the application development team on a specific database:
– Can read tables, views, and DDL
– Can write rows to the tables
– Can add columns and indexes
– Cannot drop the database
What should you do?
- A . Assign the Cloud Spanner Database Reader and Cloud Spanner Backup Writer roles.
- B . Assign the Cloud Spanner Database Admin role.
- C . Assign the Cloud Spanner Database User role.
- D . Assign the Cloud Spanner Admin role.
C
Explanation:
https://cloud.google.com/spanner/docs/iam#spanner.databaseUser
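A minimal sketch of granting roles/spanner.databaseUser to the team on one specific database through the database-level IAM policy; the project, instance, database, and group names are placeholders.

```python
# Sketch: grant roles/spanner.databaseUser on a single database so the team
# can read, write, and change the schema, but not drop the database.
# Project, instance, database, and group names are placeholders.
from googleapiclient import discovery

spanner = discovery.build("spanner", "v1")

database = "projects/my-project/instances/my-instance/databases/orders-db"

# Read the current database-level policy, append the binding, write it back.
policy = spanner.projects().instances().databases().getIamPolicy(
    resource=database, body={}
).execute()

bindings = policy.get("bindings", [])
bindings.append({
    "role": "roles/spanner.databaseUser",
    "members": ["group:app-dev-team@example.com"],
})
policy["bindings"] = bindings

spanner.projects().instances().databases().setIamPolicy(
    resource=database, body={"policy": policy}
).execute()
```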
You are configuring a brand new PostgreSQL database instance in Cloud SQL. Your application team wants to have an optimal and highly available environment with automatic failover to avoid any unplanned outage.
What should you do?
- A . Create one regional Cloud SQL instance with a read replica in another region.
- B . Create one regional Cloud SQL instance in one zone with a standby instance in another zone in the same region.
- C . Create two read-write Cloud SQL instances in two different zones with a standby instance in another region.
- D . Create two read-write Cloud SQL instances in two different regions with a standby instance in another zone.
B
Explanation:
This answer is correct because it meets the requirements of having an optimal and highly available environment with automatic failover. According to the Google Cloud documentation, a regional Cloud SQL instance is an instance that has a primary server in one zone and a standby server in another zone within the same region. The primary and standby servers are kept in sync using synchronous replication, which ensures zero data loss and minimal downtime in case of a zonal outage or an instance failure. If the primary server becomes unavailable, Cloud SQL automatically fails over to the standby server, which becomes the new primary server.
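A compact sketch of answer B using the Cloud SQL Admin API: setting availabilityType to REGIONAL is what gives the instance a synchronously replicated standby in another zone with automatic failover. Names, region, and tier are placeholders.

```python
# Sketch: create a highly available (regional) Cloud SQL for PostgreSQL
# instance. With availabilityType REGIONAL, Cloud SQL maintains a standby
# in a second zone and fails over automatically. Names are placeholders.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1beta4")

instance_body = {
    "name": "app-db",
    "region": "us-central1",
    "databaseVersion": "POSTGRES_14",
    "settings": {
        "tier": "db-custom-4-15360",
        "availabilityType": "REGIONAL",  # primary + standby in separate zones
    },
}
op = sqladmin.instances().insert(project="my-project", body=instance_body).execute()
print("Create operation:", op["name"])
```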
You plan to use Database Migration Service to migrate data from a PostgreSQL on-premises instance to Cloud SQL. You need to identify the prerequisites for creating and automating the task.
What should you do? (Choose two.)
- A . Drop or disable all users except database administration users.
- B . Disable all foreign key constraints on the source PostgreSQL database.
- C . Ensure that all PostgreSQL tables have a primary key.
- D . Shut down the database before the Data Migration Service task is started.
- E . Ensure that pglogical is installed on the source PostgreSQL database.
C, E
Explanation:
https://cloud.google.com/database-migration/docs/postgres/faq
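One way to sanity-check the two prerequisites (C and E) on the source instance before creating the migration job is sketched below; the connection details are placeholders, and the catalog query is just one common way to list tables that lack a primary key.

```python
# Sketch: verify DMS prerequisites on the source PostgreSQL instance:
# (1) pglogical is installed, and (2) every table has a primary key.
# Connection details are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="onprem-db.internal", dbname="appdb", user="postgres", password="..."
)
with conn, conn.cursor() as cur:
    # 1) Is the pglogical extension installed in this database?
    cur.execute("SELECT extname FROM pg_extension WHERE extname = 'pglogical';")
    print("pglogical installed:", cur.fetchone() is not None)

    # 2) Which ordinary tables have no primary key?
    cur.execute(
        """
        SELECT t.table_schema, t.table_name
        FROM information_schema.tables t
        LEFT JOIN information_schema.table_constraints c
          ON c.table_schema = t.table_schema
         AND c.table_name = t.table_name
         AND c.constraint_type = 'PRIMARY KEY'
        WHERE t.table_type = 'BASE TABLE'
          AND t.table_schema NOT IN ('pg_catalog', 'information_schema')
          AND c.constraint_name IS NULL;
        """
    )
    for schema, table in cur.fetchall():
        print(f"Missing primary key: {schema}.{table}")
conn.close()
```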