[Feb 05, 2023] DP-203 Exam Dumps 100% Same Q&A In Your Real Exam [Q152-Q167]

DP-203 Test Engine Dumps Training With 255 Questions

Q152. You have an Azure Synapse Analytics workspace named WS1.
You have an Azure Data Lake Storage Gen2 container that contains JSON-formatted files in the following format.
You need to use the serverless SQL pool in WS1 to read the files.
How should you complete the Transact-SQL statement? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
(A sketch of the completed statement pattern appears after Q155 below.)
Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/query-single-csv-file
https://docs.microsoft.com/en-us/sql/relational-databases/json/import-json-documents-into-sql-server

Q153. You have an Azure Synapse Analytics dedicated SQL pool that contains a table named Table1.
You have files that are ingested and loaded into an Azure Data Lake Storage Gen2 container named container1.
You plan to insert data from the files into Table1 and transform the data. Each row of data in the files will produce one row in the serving layer of Table1.
You need to ensure that when the source data files are loaded to container1, the DateTime is stored as an additional column in Table1.
Solution: You use a dedicated SQL pool to create an external table that has an additional DateTime column.
Does this meet the goal?
  Yes
  No

Q154. You need to collect application metrics, streaming query events, and application log messages for an Azure Databricks cluster.
Which type of library and workspace should you implement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Explanation
You can send application logs and metrics from Azure Databricks to a Log Analytics workspace. It uses the Azure Databricks Monitoring Library, which is available on GitHub.
Reference:
https://docs.microsoft.com/en-us/azure/architecture/databricks-monitoring/application-logs

Q155. You have an Azure Synapse Analytics dedicated SQL pool that contains a table named dbo.Users.
You need to prevent a group of users from reading user email addresses from dbo.Users. What should you use?
  row-level security
  column-level security
  Dynamic data masking
  Transparent Data Encryption (TDE)
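The correct choice in Q155 is column-level security. As a minimal sketch, assuming a hypothetical role named SalesReaders and illustrative column names (the question only establishes dbo.Users and an email column): granting SELECT on an explicit column list that omits the email column, or denying that column outright, blocks the group from reading it.

    -- Grant access to every column except Email; selecting Email then fails.
    GRANT SELECT ON dbo.Users (UserId, UserName, CreatedDate) TO SalesReaders;

    -- Alternatively, deny the single sensitive column explicitly.
    DENY SELECT ON dbo.Users (Email) TO SalesReaders;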
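Returning to Q152: the drag-and-drop targets assemble an OPENROWSET query. Below is a minimal sketch of the commonly documented pattern for reading line-delimited JSON with a serverless SQL pool; the storage URL, key names, and column types are assumptions, not values from the missing exhibit.

    SELECT doc.id, doc.name
    FROM OPENROWSET(
        BULK 'https://<storage-account>.dfs.core.windows.net/<container>/*.json',
        FORMAT = 'CSV',
        FIELDTERMINATOR = '0x0b',  -- delimiters that never occur in the data,
        FIELDQUOTE = '0x0b'        -- so each JSON document lands in one column
    ) WITH (jsonContent NVARCHAR(MAX)) AS r
    CROSS APPLY OPENJSON(r.jsonContent)
        WITH (id NVARCHAR(50), name NVARCHAR(100)) AS doc;

FORMAT = 'CSV' with the 0x0b delimiters is the documented trick for pulling each JSON document into a single NVARCHAR(MAX) column that OPENJSON can then shred into typed columns.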
Q156. You are developing a solution using a Lambda architecture on Microsoft Azure.
The data at rest layer must meet the following requirements:
Data storage:
* Serve as a repository for high volumes of large files in various formats.
* Implement optimized storage for big data analytics workloads.
* Ensure that data can be organized using a hierarchical structure.
Batch processing:
* Use a managed solution for in-memory computation processing.
* Natively support Scala, Python, and R programming languages.
* Provide the ability to resize and terminate the cluster automatically.
Analytical data store:
* Support parallel processing.
* Use columnar storage.
* Support SQL-based languages.
You need to identify the correct technologies to build the Lambda architecture.
Which technologies should you use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Reference:
https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-namespace
https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/batch-processing
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-overview-what-is

Q157. You plan to create an Azure Synapse Analytics dedicated SQL pool.
You need to minimize the time it takes to identify queries that return confidential information as defined by the company's data privacy regulations and the users who executed the queries.
Which two components should you include in the solution? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
  sensitivity-classification labels applied to columns that contain confidential information
  resource tags for databases that contain confidential information
  audit logs sent to a Log Analytics workspace
  dynamic data masking for columns that contain confidential information
Explanation
A: You can classify columns manually, as an alternative or in addition to the recommendation-based classification:
* Select Add classification in the top menu of the pane.
* In the context window that opens, select the schema, table, and column that you want to classify, and the information type and sensitivity label.
* Select Add classification at the bottom of the context window.
C: An important aspect of the information-protection paradigm is the ability to monitor access to sensitive data. Azure SQL Auditing has been enhanced to include a new field in the audit log called data_sensitivity_information. This field logs the sensitivity classifications (labels) of the data that was returned by a query. (A labeling sketch appears after Q158 below.)
Reference:
https://docs.microsoft.com/en-us/azure/azure-sql/database/data-discovery-and-classification-overview

Q158. You have a Microsoft SQL Server database that uses a third normal form schema.
You plan to migrate the data in the database to a star schema in an Azure Synapse Analytics dedicated SQL pool.
You need to design the dimension tables. The solution must optimize read operations.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Reference:
https://www.mssqltips.com/sqlservertip/5614/explore-the-role-of-normal-forms-in-dimensional-modeling/
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-identity
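To make the Q158 guidance concrete, here is a hedged sketch of a star-schema dimension in a dedicated SQL pool, assuming hypothetical table and column names: an IDENTITY surrogate key (the subject of the second reference above) plus REPLICATE distribution, which copies a small dimension to every compute node so joins avoid data movement.

    CREATE TABLE dbo.DimCustomer
    (
        CustomerKey   INT IDENTITY(1, 1) NOT NULL,  -- surrogate key
        CustomerAltId NVARCHAR(20)       NOT NULL,  -- business key from the source
        CustomerName  NVARCHAR(100)      NOT NULL
    )
    WITH
    (
        DISTRIBUTION = REPLICATE,
        CLUSTERED COLUMNSTORE INDEX
    );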
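And to close out Q157: sensitivity labels can be applied in T-SQL as well as in the portal. A small sketch, assuming a hypothetical dbo.Customers.Email column and label names; once classified, audited queries that return this column carry the label in data_sensitivity_information.

    -- Label the column; auditing then records the label whenever a query
    -- returns this column.
    ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.Email
    WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info');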
Q159. You have a self-hosted integration runtime in Azure Data Factory.
The current status of the integration runtime has the following configurations:
* Status: Running
* Type: Self-Hosted
* Version: 4.4.7292.1
* Running / Registered Node(s): 1/1
* High Availability Enabled: False
* Linked Count: 0
* Queue Length: 0
* Average Queue Duration: 0.00s
The integration runtime has the following node details:
* Name: X-M
* Status: Running
* Version: 4.4.7292.1
* Available Memory: 7697MB
* CPU Utilization: 6%
* Network (In/Out): 1.21KBps/0.83KBps
* Concurrent Jobs (Running/Limit): 2/14
* Role: Dispatcher/Worker
* Credential Status: In Sync
Use the drop-down menus to select the answer choice that completes each statement based on the information presented.
NOTE: Each correct selection is worth one point.
Explanation
Box 1: fail until the node comes back online
We see: High Availability Enabled: False
Note: Enabling high availability ensures that the self-hosted integration runtime is no longer the single point of failure in your big data solution or cloud data integration with Data Factory.
Box 2: lowered
We see:
Concurrent Jobs (Running/Limit): 2/14
CPU Utilization: 6%
Note: When the processor and available RAM aren't well utilized, but the execution of concurrent jobs reaches a node's limits, scale up by increasing the number of concurrent jobs that a node can run.
Reference:
https://docs.microsoft.com/en-us/azure/data-factory/create-self-hosted-integration-runtime

Q160. You have a SQL pool in Azure Synapse.
You discover that some queries fail or take a long time to complete.
You need to monitor for transactions that have rolled back.
Which dynamic management view should you query?
  sys.dm_pdw_request_steps
  sys.dm_pdw_nodes_tran_database_transactions
  sys.dm_pdw_waits
  sys.dm_pdw_exec_sessions
Explanation
You can use Dynamic Management Views (DMVs) to monitor your workload, including investigating query execution in the SQL pool.
If your queries are failing or taking a long time to proceed, you can check and monitor whether you have any transactions rolling back.
Example:
    -- Monitor rollback
    SELECT
        SUM(CASE WHEN t.database_transaction_next_undo_lsn IS NOT NULL THEN 1 ELSE 0 END),
        t.pdw_node_id,
        nod.[type]
    FROM sys.dm_pdw_nodes_tran_database_transactions t
    JOIN sys.dm_pdw_nodes nod ON t.pdw_node_id = nod.pdw_node_id
    GROUP BY t.pdw_node_id, nod.[type]
Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monit

Q161. You have an Azure Data Factory pipeline that has the activities shown in the following exhibit.
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
Reference:
https://datasavvy.me/2021/02/18/azure-data-factory-activity-failures-and-pipeline-outcomes/

Q162. You have an Azure Synapse Analytics dedicated SQL pool that contains the users shown in the following table.
User1 executes a query on the database, and the query returns the results shown in the following exhibit.
User1 is the only user who has access to the unmasked data.
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
Reference:
https://docs.microsoft.com/en-us/azure/azure-sql/database/dynamic-data-masking-overview
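Q162 turns on dynamic data masking behavior. A hedged sketch of how such a setup is typically created, with placeholder table, column, and masking choices (the real ones are in the missing exhibit):

    -- Mask the column for everyone by default; built-in functions include
    -- email(), default(), and partial().
    ALTER TABLE dbo.Users
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

    -- User1 sees unmasked data because of a grant like this.
    GRANT UNMASK TO User1;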
Q163. You have an Azure subscription that contains a logical Microsoft SQL server named Server1. Server1 hosts an Azure Synapse Analytics dedicated SQL pool named Pool1.
You need to recommend a Transparent Data Encryption (TDE) solution for Server1. The solution must meet the following requirements:
* Track the usage of encryption keys.
* Maintain the access of client apps to Pool1 in the event of an Azure datacenter outage that affects the availability of the encryption keys.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/security/workspaces-encryption
https://docs.microsoft.com/en-us/azure/key-vault/general/logging

Q164. You have a SQL pool in Azure Synapse.
You plan to load data from Azure Blob storage to a staging table. Approximately 1 million rows of data will be loaded daily. The table will be truncated before each daily load.
You need to create the staging table. The solution must minimize how long it takes to load the data to the staging table.
How should you configure the table? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Explanation
Box 1: Hash
Hash-distributed tables improve query performance on large fact tables. They can have very large numbers of rows and still achieve high performance.
Box 2: Clustered columnstore
When creating partitions on clustered columnstore tables, it is important to consider how many rows belong to each partition. For optimal compression and performance of clustered columnstore tables, a minimum of 1 million rows per distribution and partition is needed.
Box 3: Date
Table partitions enable you to divide your data into smaller groups of data. In most cases, table partitions are created on a date column.
Partition switching can be used to quickly remove or replace a section of a table.
(A CREATE TABLE sketch for this configuration appears after Q166 below.)
Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-partition
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribu

Q165. You need to design the partitions for the product sales transactions. The solution must meet the sales transaction dataset requirements.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is

Q166. You have a data model that you plan to implement in a data warehouse in Azure Synapse Analytics as shown in the following exhibit.
All the dimension tables will be less than 2 GB after compression, and the fact table will be approximately 6 TB.
Which type of table should you use for each table? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
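For Q166, the usual mapping is: dimensions under 2 GB become replicated tables, and the 6 TB fact table is hash-distributed on a high-cardinality join key. A minimal sketch with hypothetical names:

    -- Small dimension (< 2 GB compressed): copy it to every compute node.
    CREATE TABLE dbo.DimProduct
    (
        ProductKey  INT           NOT NULL,
        ProductName NVARCHAR(100) NOT NULL
    )
    WITH (DISTRIBUTION = REPLICATE, CLUSTERED COLUMNSTORE INDEX);

    -- Large fact (~6 TB): spread rows across distributions by a join key.
    CREATE TABLE dbo.FactSales
    (
        ProductKey INT           NOT NULL,
        SaleAmount DECIMAL(18,2) NOT NULL
    )
    WITH (DISTRIBUTION = HASH(ProductKey), CLUSTERED COLUMNSTORE INDEX);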
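Here is the CREATE TABLE sketch promised under Q164, combining the three boxes of its explanation (hash distribution, clustered columnstore index, date partitioning). Table, column, and partition boundary values are illustrative assumptions only:

    CREATE TABLE dbo.StageSales
    (
        TransactionDate DATE          NOT NULL,
        ProductId       INT           NOT NULL,
        Amount          DECIMAL(18,2) NOT NULL
    )
    WITH
    (
        DISTRIBUTION = HASH(ProductId),
        CLUSTERED COLUMNSTORE INDEX,
        PARTITION (TransactionDate RANGE RIGHT FOR VALUES
            ('2023-01-01', '2023-02-01', '2023-03-01'))
    );

    -- The table is emptied before each daily load; TRUNCATE is metadata-only.
    TRUNCATE TABLE dbo.StageSales;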
Q167. You have an Azure Synapse Analytics dedicated SQL pool.
You run DBCC PDW_SHOWSPACEUSED('dbo.FactInternetSales'); and get the results shown in the following table.
Which statement accurately describes the dbo.FactInternetSales table?
  The table contains less than 1,000 rows.
  All distributions contain data.
  The table is skewed.
  The table uses round-robin distribution.
Explanation
Data skew means the data is not distributed evenly across the distributions: some distributions hold far more rows than others, so the work of a query is not spread evenly across compute.
Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute

DP-203 Practice Test Pdf Exam Material: https://www.vceprep.com/DP-203-latest-vce-prep.html
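One last sketch for Q167: the DBCC output that reveals skew also points at the remedy, rebuilding the table with a better distribution key via CTAS. The distribution column below is an assumption; pick a high-cardinality column that is frequently joined on.

    -- Rows per distribution; widely varying ROWS values indicate skew.
    DBCC PDW_SHOWSPACEUSED('dbo.FactInternetSales');

    -- A common remedy: CTAS into a new table with a less skewed distribution
    -- key, then swap names. (RENAME OBJECT is dedicated SQL pool syntax.)
    CREATE TABLE dbo.FactInternetSales_new
    WITH (DISTRIBUTION = HASH(SalesOrderNumber), CLUSTERED COLUMNSTORE INDEX)
    AS SELECT * FROM dbo.FactInternetSales;

    RENAME OBJECT dbo.FactInternetSales TO FactInternetSales_old;
    RENAME OBJECT dbo.FactInternetSales_new TO FactInternetSales;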