Practice Free Professional Cloud Database Engineer Exam Online Questions
Your team is building an application that stores and analyzes streaming time series financial data. You need a database solution that can perform time series-based scans with sub-second latency. The solution must scale into the hundreds of terabytes and be able to write up to 10k records per second and read up to 200 MB per second.
What should you do?
- A . Use Firestore.
- B . Use Bigtable.
- C . Use BigQuery.
- D . Use Cloud Spanner.
B
Explanation:
Bigtable is well suited for time series and financial data, such as transaction histories, stock prices, and currency exchange rates.
https://cloud.google.com/bigtable/docs/overview#what-its-good-for
Typical per-node throughput with SSD storage:
Reads – up to 10,000 rows per second
Writes – up to 10,000 rows per second
Scans – up to 220 MB/s
https://cloud.google.com/bigtable/docs/performance#typical-workloads
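As a purely illustrative sketch (the project, instance, table, column family, and row-key scheme below are hypothetical), this is roughly how such time series data is written to and range-scanned from Bigtable with the Python client:

```python
# Hypothetical names throughout; illustrates the write and scan patterns only.
from datetime import datetime, timezone

from google.cloud import bigtable
from google.cloud.bigtable.row_set import RowSet

client = bigtable.Client(project="my-project")
table = client.instance("timeseries-instance").table("market-data")

# Write one data point. Row keys that embed the series ID plus a timestamp keep
# related points contiguous, so time-range scans stay fast.
ts = datetime.now(timezone.utc)
row = table.direct_row(f"GOOG#{ts.strftime('%Y%m%d%H%M%S%f')}".encode())
row.set_cell("prices", "close", b"142.17", timestamp=ts)
row.commit()

# Scan one trading day for the same series with a row-key range.
row_set = RowSet()
row_set.add_row_range_from_keys(start_key=b"GOOG#20240102", end_key=b"GOOG#20240103")
for r in table.read_rows(row_set=row_set):
    print(r.row_key, r.cells["prices"][b"close"][0].value)
```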
You are designing a database strategy for a new web application in one region. You need to minimize write latency.
What should you do?
- A . Use Cloud SQL with cross-region replicas.
- B . Use high availability (HA) Cloud SQL with multiple zones.
- C . Use zonal Cloud SQL without high availability (HA).
- D . Use Cloud Spanner in a regional configuration.
C

Explanation:
A zonal Cloud SQL instance without HA commits writes in a single zone, so there is no synchronous cross-zone replication in the write path. Both HA Cloud SQL and a regional Cloud Spanner configuration replicate each write synchronously across zones, which adds latency.
https://cloud.google.com/sql/docs/mysql/high-availability
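A minimal sketch, assuming hypothetical project and instance names, of creating that single-zone, non-HA instance through the Cloud SQL Admin API; availabilityType ZONAL is what keeps writes out of a synchronous cross-zone replication path:

```python
# Hypothetical project/instance names; requires google-api-python-client and default credentials.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1")
body = {
    "name": "webapp-db",
    "region": "us-central1",
    "databaseVersion": "MYSQL_8_0",
    "settings": {
        "tier": "db-custom-2-7680",
        "availabilityType": "ZONAL",  # single zone: no synchronous cross-zone replica in the write path
    },
}
sqladmin.instances().insert(project="my-project", body=body).execute()
```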
You recently launched a new product to the US market. You currently have two Bigtable clusters in one US region to serve all the traffic. Your marketing team is planning an immediate expansion to APAC. You need to roll out the regional expansion while implementing high availability according to Google-recommended practices.
What should you do?
- A . Maintain a target of 23% CPU utilization by locating:
cluster-a in zone us-central1-a
cluster-b in zone europe-west1-d
cluster-c in zone asia-east1-b
- B . Maintain a target of 23% CPU utilization by locating:
cluster-a in zone us-central1-a
cluster-b in zone us-central1-b
cluster-c in zone us-east1-a
- C . Maintain a target of 35% CPU utilization by locating:
cluster-a in zone us-central1-a
cluster-b in zone australia-southeast1-a
cluster-c in zone europe-west1-d
cluster-d in zone asia-east1-b
- D . Maintain a target of 35% CPU utilization by locating:
cluster-a in zone us-central1-a
cluster-b in zone us-central2-a
cluster-c in zone asia-northeast1-b
cluster-d in zone asia-east1-b
D
Explanation:
Option D keeps two clusters in each serving geography (US and APAC), so client traffic can fail over to a nearby cluster, and the 35% CPU utilization target leaves each cluster enough headroom to absorb the failed-over load.
https://cloud.google.com/bigtable/docs/replication-settings#regional-failover
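A minimal sketch, with hypothetical project, instance, and cluster IDs, of adding the APAC clusters from option D to the existing instance with the Bigtable admin client (the US clusters stay in place):

```python
# Hypothetical IDs; each cluster.create() call adds one replicated cluster to the instance.
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("prod-instance")

for cluster_id, zone in [
    ("cluster-c", "asia-northeast1-b"),
    ("cluster-d", "asia-east1-b"),
]:
    cluster = instance.cluster(cluster_id, location_id=zone, serve_nodes=3)
    operation = cluster.create()
    operation.result(timeout=600)  # wait until the new cluster is provisioned
```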
You are managing a small Cloud SQL instance for developers to do testing. The instance is not critical and has a recovery point objective (RPO) of several days. You want to minimize ongoing costs for this instance.
What should you do?
- A . Take no backups, and turn off transaction log retention.
- B . Take one manual backup per day, and turn off transaction log retention.
- C . Turn on automated backup, and turn off transaction log retention.
- D . Turn on automated backup, and turn on transaction log retention.
C
Explanation:
A daily automated backup comfortably meets an RPO of several days, and turning off transaction log retention avoids paying for point-in-time recovery log storage that this instance does not need.
https://cloud.google.com/sql/docs/mysql/backup-recovery/backups
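A minimal sketch, using hypothetical project and instance names, of applying option C to a Cloud SQL for MySQL instance through the Cloud SQL Admin API (for MySQL, transaction log retention corresponds to binary logging; for PostgreSQL the analogous field is pointInTimeRecoveryEnabled):

```python
# Hypothetical names; keeps daily automated backups but drops transaction log retention.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1")
body = {
    "settings": {
        "backupConfiguration": {
            "enabled": True,            # daily automated backups stay on
            "startTime": "03:00",       # backup window start (UTC)
            "binaryLogEnabled": False,  # MySQL: no binary/transaction log retention
        }
    }
}
sqladmin.instances().patch(project="my-project", instance="dev-test-db", body=body).execute()
```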
Your customer is running a MySQL database on-premises with read replicas. The nightly incremental backups are expensive and add maintenance overhead. You want to follow Google-recommended practices to migrate the database to Google Cloud, and you need to ensure minimal downtime.
What should you do?
- A . Create a Google Kubernetes Engine (GKE) cluster, install MySQL on the cluster, and then import the dump file.
- B . Use the mysqldump utility to take a backup of the existing on-premises database, and then import it into Cloud SQL.
- C . Create a Compute Engine VM, install MySQL on the VM, and then import the dump file.
- D . Create an external replica, and use Cloud SQL to synchronize the data to the replica.
D
Explanation:
https://cloud.google.com/sql/docs/mysql/replication/configure-replication-from-external
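A minimal sketch, with hypothetical names, addresses, and credentials, of the two Cloud SQL Admin API calls this approach rests on: a source representation of the on-premises primary, then a Cloud SQL replica that continuously synchronizes from it:

```python
# Hypothetical values throughout; illustrates replication from an external MySQL server.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1")

# 1. Represent the on-premises primary inside Cloud SQL.
sqladmin.instances().insert(project="my-project", body={
    "name": "onprem-source-repr",
    "region": "us-central1",
    "databaseVersion": "MYSQL_8_0",
    "onPremisesConfiguration": {"hostPort": "203.0.113.10:3306"},
}).execute()

# 2. Create the Cloud SQL replica that pulls ongoing changes from the external primary.
sqladmin.instances().insert(project="my-project", body={
    "name": "cloudsql-replica",
    "region": "us-central1",
    "databaseVersion": "MYSQL_8_0",
    "settings": {"tier": "db-custom-4-15360"},
    "masterInstanceName": "onprem-source-repr",
    "replicaConfiguration": {
        "mysqlReplicaConfiguration": {
            "username": "replication_user",
            "password": "REDACTED",
            "dumpFilePath": "gs://my-bucket/initial-dump.sql.gz",
        }
    },
}).execute()
```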
You are migrating a telehealth care company’s on-premises data center to Google Cloud.
The migration plan specifies:
– PostgreSQL databases must be migrated to a multi-region backup configuration with cross-region replicas to allow restore and failover in multiple scenarios.
– MySQL databases handle personally identifiable information (PII) and require data residency compliance at the regional level.
You want to set up the environment with minimal administrative effort.
What should you do?
- A . Set up Cloud Logging and Cloud Monitoring with Cloud Functions to send an alert every time a new database instance is created, and manually validate the region.
- B . Set up different organizations for each database type, and apply policy constraints at the organization level.
- C . Set up Pub/Sub to ingest data from Cloud Logging, send an alert every time a new database instance is created, and manually validate the region.
- D . Set up different projects for PostgreSQL and MySQL databases, and apply organizational policy constraints at a project level.

D

Explanation:
Separate projects let you apply the resource location organization policy constraint only to the MySQL project, enforcing regional data residency for the PII workloads with minimal administrative effort, while the PostgreSQL project remains free to use multi-region backups and cross-region replicas.
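A minimal sketch, assuming a hypothetical project ID and location value group, of pinning the MySQL project to one region with the gcp.resourceLocations constraint via the Organization Policy v2 client:

```python
# Hypothetical project ID and location group; restricts new resources to one region.
from google.cloud import orgpolicy_v2

client = orgpolicy_v2.OrgPolicyClient()
project = "projects/telehealth-mysql-prod"

policy = orgpolicy_v2.Policy(
    name=f"{project}/policies/gcp.resourceLocations",
    spec=orgpolicy_v2.PolicySpec(
        rules=[
            orgpolicy_v2.PolicySpec.PolicyRule(
                values=orgpolicy_v2.PolicySpec.PolicyRule.StringValues(
                    allowed_values=["in:us-central1-locations"]  # value group for the permitted region
                )
            )
        ]
    ),
)
client.create_policy(parent=project, policy=policy)
```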
Your company wants to migrate an Oracle-based application to Google Cloud. The application team currently uses Oracle Recovery Manager (RMAN) to back up the database to tape for long-term retention (LTR). You need a cost-effective backup and restore solution that meets a 2-hour recovery time objective (RTO) and a 15-minute recovery point objective (RPO).
What should you do?
- A . Migrate the Oracle databases to Bare Metal Solution for Oracle, and store backups on tapes on-premises.
- B . Migrate the Oracle databases to Bare Metal Solution for Oracle, and use Actifio to store backup files on Cloud Storage using the Nearline Storage class.
- C . Migrate the Oracle databases to Bare Metal Solution for Oracle, and back up the Oracle databases to Cloud Storage using the Standard Storage class.
- D . Migrate the Oracle databases to Compute Engine, and store backups on tapes on-premises.
B
Explanation:
https://www.actifio.com/solutions/cloud/google/
Your company wants to move to Google Cloud. Your current data center is closing in six months. You are running a large, highly transactional Oracle application footprint on VMware. You need to design a solution with minimal disruption to the current architecture and provide ease of migration to Google Cloud.
What should you do?
- A . Migrate applications and Oracle databases to Google Cloud VMware Engine (VMware Engine).
- B . Migrate applications and Oracle databases to Compute Engine.
- C . Migrate applications to Cloud SQL.
- D . Migrate applications and Oracle databases to Google Kubernetes Engine (GKE).
A
Explanation:
https://cloud.google.com/blog/products/databases/migrate-databases-to-google-cloud-vmware-engine-gcve
You have a large Cloud SQL for PostgreSQL instance. The database instance is not mission-critical, and you want to minimize operational costs.
What should you do to lower the cost of backups in this environment?
- A . Set the automated backups to occur every other day to lower the frequency of backups.
- B . Change the storage tier of the automated backups from solid-state drive (SSD) to hard disk drive (HDD).
- C . Select a different region to store your backups.
- D . Reduce the number of automated backups that are retained to two (2).
D
Explanation:
By default, Cloud SQL retains seven automated backups per instance, in addition to any on-demand backups. You can configure how many automated backups to retain (from 1 to 365), and backup storage is billed at a lower rate than instance storage, so keeping only two automated backups reduces ongoing cost.
https://cloud.google.com/sql/docs/mysql/backup-recovery/backups
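A minimal sketch, with hypothetical project and instance names, of lowering automated backup retention to two through the Cloud SQL Admin API:

```python
# Hypothetical names; retains only the two most recent automated backups.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1")
body = {
    "settings": {
        "backupConfiguration": {
            "enabled": True,
            "backupRetentionSettings": {
                "retentionUnit": "COUNT",
                "retainedBackups": 2,
            },
        }
    }
}
sqladmin.instances().patch(project="my-project", instance="pg-analytics", body=body).execute()
```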