Practice Free Professional Cloud Database Engineer Exam Online Questions
You are designing a physician portal app in Node.js. This application will be used in hospitals and clinics that might have intermittent internet connectivity. If a connectivity failure occurs, the app should be able to query the cached data. You need to ensure that the application has scalability, strong consistency, and multi-region replication.
What should you do?
- A . Use Firestore and ensure that the PersistenceEnabled option is set to true.
- B . Use Memorystore for Memcached.
- C . Use Pub/Sub to synchronize the changes from the application to Cloud Spanner.
- D . Use Table.read with the exactStaleness option to perform a read of rows in Cloud Spanner.
A
Explanation:
https://firebase.google.com/docs/firestore/manage-data/enable-offline
You need to perform a one-time migration of data from a running Cloud SQL for MySQL instance in
the us-central1 region to a new Cloud SQL for MySQL instance in the us-east1 region. You want to follow Google-recommended practices to minimize performance impact on the currently running instance.
What should you do?
- A . Create and run a Dataflow job that uses JdbcIO to copy data from one Cloud SQL instance to another.
- B . Create two Datastream connection profiles, and use them to create a stream from one Cloud SQL instance to another.
- C . Create a SQL dump file in Cloud Storage using a temporary instance, and then use that file to import into a new instance.
- D . Create a CSV file by running the SQL statement SELECT…INTO OUTFILE, copy the file to a Cloud Storage bucket, and import it into a new instance.
C
Explanation:
https://cloud.google.com/sql/docs/mysql/import-export#serverless
Your company uses Bigtable for a user-facing application that displays a low-latency real-time dashboard. You need to recommend the optimal storage type for this read-intensive database.
What should you do?
- A . Recommend solid-state drives (SSD).
- B . Recommend splitting the Bigtable instance into two instances in order to load balance the concurrent reads.
- C . Recommend hard disk drives (HDD).
- D . Recommend mixed storage types.
A
Explanation:
From the Bigtable storage guidance: if you plan to store extensive historical data for a large number of remote-sensing devices and then use the data to generate daily reports, the cost savings of HDD storage might justify the performance tradeoff. On the other hand, if you plan to use the data to display a real-time dashboard, HDD storage would probably not make sense: reads would be much more frequent in this case, and reads that are not scans are much slower with HDD storage. For a read-intensive, low-latency dashboard, SSD is the recommended storage type.
You need to redesign the architecture of an application that currently uses Cloud SQL for PostgreSQL. The users of the application complain about slow query response times. You want to enhance your application architecture to offer sub-millisecond query latency.
What should you do?
- A . Configure Firestore, and modify your application to offload queries.
- B . Configure Bigtable, and modify your application to offload queries.
- C . Configure Cloud SQL for PostgreSQL read replicas to offload queries.
- D . Configure Memorystore, and modify your application to offload queries.
D
Explanation:
A sub-millisecond latency requirement points to Memorystore, an in-memory cache (Redis or Memcached). Because the workload is relational (Cloud SQL for PostgreSQL), Bigtable is not an appropriate fit, and read replicas would still serve queries at disk-backed database latencies rather than sub-millisecond ones.
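Offloading queries to Memorystore typically means a cache-aside pattern: the application checks the cache first and falls back to the database only on a miss. A minimal sketch, using a plain dict to stand in for a Redis client (in production this would be a redis-py client pointed at the Memorystore endpoint; the function and key names here are illustrative, not part of any Google Cloud API):

```python
# Cache-aside sketch: check the cache first, fall back to the database,
# then populate the cache so subsequent reads are served from memory.
# A dict stands in for a Redis (Memorystore) client; names are illustrative.

cache = {}  # stand-in for redis.Redis(host="<memorystore-ip>")

def query_database(user_id):
    # Placeholder for a Cloud SQL for PostgreSQL query.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    hit = cache.get(key)           # sub-millisecond path on a cache hit
    if hit is not None:
        return hit
    row = query_database(user_id)  # slower path: hits the relational DB
    cache[key] = row               # populate the cache (use a TTL in real Redis)
    return row
```

With a real Redis client you would also set an expiry (for example `SETEX`) so cached rows do not serve stale data indefinitely.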
Your organization is running a MySQL workload in Cloud SQL. Suddenly you see a degradation in
database performance. You need to identify the root cause of the performance degradation.
What should you do?
- A . Use Logs Explorer to analyze log data.
- B . Use Cloud Monitoring to monitor CPU, memory, and storage utilization metrics.
- C . Use Error Reporting to count, analyze, and aggregate the data.
- D . Use Cloud Debugger to inspect the state of an application.
B
Explanation:
https://cloud.google.com/sql/docs/mysql/diagnose-issues ("If your instance stops responding to connections or performance is degraded, make sure it conforms to the Operational Guidelines.")
Your company uses Cloud Spanner for a mission-critical inventory management system that is globally available. You recently loaded stock keeping unit (SKU) and product catalog data from a company acquisition and observed hot-spots in the Cloud Spanner database. You want to follow Google-recommended schema design practices to avoid performance degradation.
What should you do? (Choose two.)
- A . Use an auto-incrementing value as the primary key.
- B . Normalize the data model.
- C . Promote low-cardinality attributes in multi-attribute primary keys.
- D . Promote high-cardinality attributes in multi-attribute primary keys.
- E . Use bit-reverse sequential value as the primary key.
DE
Explanation:
https://cloud.google.com/spanner/docs/schema-design
D is correct because promoting high-cardinality attributes (those with many distinct values, such as UUIDs, email addresses, or timestamps) to the front of a multi-attribute primary key distributes rows more evenly across the key space, so requests are spread across servers instead of concentrating on a single one.
E is correct because Spanner supports bit-reversed sequence values specifically to reduce hotspotting: reversing the bits of a monotonically increasing value (such as a timestamp or an auto-incrementing ID) produces pseudo-random keys that spread writes across the key space instead of appending every insert at the end of the table. See https://cloud.google.com/spanner/docs/schema-design#bit_reverse_primary_key
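The effect of bit-reversal is easy to see with a short sketch. This is a simplified illustration of the idea, not Spanner's exact sequence implementation (Spanner's bit-reversed sequences operate on positive INT64 values):

```python
def bit_reverse(value: int, bits: int = 64) -> int:
    """Reverse the low `bits` bits of value, so consecutive inputs
    map to keys spread across the whole key space."""
    result = 0
    for i in range(bits):
        if value & (1 << i):
            result |= 1 << (bits - 1 - i)
    return result

# Consecutive IDs 1, 2, 3 land far apart instead of adjacent,
# so sequential inserts no longer pile onto one key range.
keys = [bit_reverse(n) for n in (1, 2, 3)]
```

Note that the transform is its own inverse (`bit_reverse(bit_reverse(x)) == x`), so the original sequential value can always be recovered from the stored key.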
You are building a data warehouse on BigQuery. Sources of data include several MySQL databases located on-premises.
You need to transfer data from these databases into BigQuery for analytics. You want to use a managed solution that has low latency and is easy to set up.
What should you do?
- A . Create extracts from your on-premises databases periodically, and push these extracts to Cloud Storage. Upload the changes into BigQuery, and merge them with existing tables.
- B . Use Cloud Data Fusion and scheduled workflows to extract data from MySQL. Transform this data into the appropriate schema, and load this data into your BigQuery database.
- C . Use Datastream to connect to your on-premises database and create a stream. Have Datastream write to Cloud Storage. Then use Dataflow to process the data into BigQuery.
- D . Use Database Migration Service to replicate data to a Cloud SQL for MySQL instance. Create federated tables in BigQuery on top of the replicated instances to transform and load the data into your BigQuery database.
You are starting a large CSV import into a Cloud SQL for MySQL instance that has many open connections. You checked memory and CPU usage, and sufficient resources are available. You want to follow Google-recommended practices to ensure that the import will not time out.
What should you do?
- A . Close idle connections or restart the instance before beginning the import operation.
- B . Increase the amount of memory allocated to your instance.
- C . Ensure that the service account has the Storage Admin role.
- D . Increase the number of CPUs for the instance to ensure that it can handle the additional import operation.
A
Explanation:
https://cloud.google.com/sql/docs/mysql/import-export#troubleshooting
Your company is evaluating Google Cloud database options for a mission-critical global payments gateway application. The application must be available 24/7 to users worldwide, horizontally scalable, and support open source databases. You need to select an automatically shardable, fully managed database with 99.999% availability and strong transactional consistency.
What should you do?
- A . Select Bare Metal Solution for Oracle.
- B . Select Cloud SQL.
- C . Select Bigtable.
- D . Select Cloud Spanner.
D
Explanation:
Cloud Spanner is the only option that meets every requirement: it is fully managed, horizontally scalable with automatic sharding, offers a 99.999% availability SLA in multi-region configurations, and provides strong transactional consistency, along with a PostgreSQL interface for open-source compatibility.