70-475 | Microsoft 70-475 Free Practice Questions 2019

Microsoft 70-475 practice questions for Microsoft certification. Real success guaranteed with the updated 70-475 exam material. 100% pass the 70-475 Designing and Implementing Big Data Analytics Solutions exam today!

Free 70-475 Demo Online For Microsoft Certification:

NEW QUESTION 1
You have a web app that accepts user input, and then uses a Microsoft Azure Machine Learning model to predict a characteristic of the user.
You need to perform the following operations:
• Track the number of web app users from month to month.
• Track the number of successful predictions made during the last minute.
• Create a dashboard showcasing the analytics for the predictions and the web app usage.
Which lambda layer should you query for each operation? To answer, drag the appropriate layers to the correct operations. Each layer may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
[Exhibit]

    Answer:

    Explanation: Lambda architecture is a data-processing architecture designed to handle massive quantities of data by taking advantage of both batch- and stream-processing methods. This approach to architecture attempts to balance latency, throughput, and fault-tolerance by using batch processing to provide comprehensive and accurate views of batch data, while simultaneously using real-time stream processing to provide views of online data. The two view outputs may be joined before presentation.
    Box 1: Speed
    The speed layer processes data streams in real time and without the requirements of fix-ups or completeness. This layer sacrifices throughput as it aims to minimize latency by providing real-time views into the most recent data.
    Box 2: Batch
    The batch layer precomputes results using a distributed processing system that can handle very large quantities of data. The batch layer aims at perfect accuracy by being able to process all available data when generating views.
    Box 3: Serving
    Output from the batch and speed layers are stored in the serving layer, which responds to ad-hoc queries by returning precomputed views or building views from the processed data.

    NEW QUESTION 2
    You have an Apache Storm cluster.
    The cluster will ingest data from a Microsoft Azure event hub.
    The event hub has the characteristics described in the following table.
    [Exhibit]
    You are designing the Storm application topology.
    You need to ingest data from all of the partitions. The solution must maximize the throughput of the data ingestion.
    Which setting should you use?

    • A. Partition Count
    • B. Message Retention
    • C. Partition Key
    • D. Shared access policies

    Answer: A

    NEW QUESTION 3
    You have a Microsoft Azure SQL database that contains Personally Identifiable Information (PII).
    To mitigate the PII risk, you need to ensure that data is encrypted while the data is at rest. The solution must minimize any changes to front-end applications.
    What should you use?

    • A. Transport Layer Security (TLS)
    • B. transparent data encryption (TDE)
    • C. a shared access signature (SAS)
    • D. the ENCRYPTBYPASSPHRASE T-SQL function

    Answer: B

    Explanation: Transparent data encryption (TDE) helps protect Azure SQL Database, Azure SQL Managed Instance, and Azure Data Warehouse against the threat of malicious activity. It performs real-time encryption and decryption of the database, associated backups, and transaction log files at rest without requiring changes to the application.
    References: https://docs.microsoft.com/en-us/azure/sql-database/transparent-data-encryption-azure-sql
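    For reference, TDE can be enabled without touching the application by using the AzureRM.Sql PowerShell cmdlets. The following is a minimal sketch; the resource group, server, and database names are placeholders, not values from the question.

# Minimal sketch using the AzureRM.Sql module; the names below are placeholders.
# Enable transparent data encryption on an existing Azure SQL database.
Set-AzureRmSqlDatabaseTransparentDataEncryption -ResourceGroupName "PiiRG" `
    -ServerName "pii-sqlserver" -DatabaseName "CustomerDb" -State "Enabled"

# Verify the TDE state; no front-end application changes are required.
Get-AzureRmSqlDatabaseTransparentDataEncryption -ResourceGroupName "PiiRG" `
    -ServerName "pii-sqlserver" -DatabaseName "CustomerDb"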

    NEW QUESTION 4
    You have a Microsoft Azure Machine Learning Solution that contains several Azure Data Factory pipeline jobs.
    You discover that the job for a dataset named CustomerSalesData fails. You resolve the issue that caused the job to fail.
    You need to rerun the slices for CustomerSalesData. What should you do?

    • A. Run the Set-AzureRMDataFactorySliceStatus cmdlet and specify the -Status Retry parameter.
    • B. Run the Set-AzureRMDataFactorySliceStatus cmdlet and specify the -Status PendingExecution parameter.
    • C. Run the Resume-AzureRMDataFactoryPipeline cmdlet and specify the -Status Retry parameter.
    • D. Run the Resume-AzureRMDataFactoryPipeline cmdlet and specify the -Status PendingExecution parameter.

    Answer: B
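    As a companion to the answer, the following sketch shows how the cmdlet from option B might be invoked for the CustomerSalesData dataset. The resource group, data factory name, and slice time window are assumed values, not details from the question.

# Sketch only (ADF v1 AzureRM.DataFactories cmdlets); the resource group,
# factory name, and time window below are placeholder values.
Set-AzureRMDataFactorySliceStatus -ResourceGroupName "AnalyticsRG" `
    -DataFactoryName "MLDataFactory" `
    -DatasetName "CustomerSalesData" `
    -StartDateTime "2019-05-01T00:00:00Z" `
    -EndDateTime "2019-05-02T00:00:00Z" `
    -Status "PendingExecution" `
    -UpdateType "UpstreamInPipeline"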

    NEW QUESTION 5
    You have four on-premises Microsoft SQL Server data sources as described in the following table.
    [Exhibit]
    You plan to create three Azure data factories that will interact with the data sources as described in the following table.
    [Exhibit]
    You need to deploy Microsoft Data Management Gateway to support the Azure Data Factory deployment. The solution must use new servers to host the instances of Data Management Gateway.
    What is the minimum number of new servers and Data Management Gateway instances that you should deploy? To answer, select the appropriate options in the answer area.
    NOTE: Each correct selection is worth one point.
    [Exhibit]

      Answer:

      Explanation: Box 1: 3
      Box 2: 3
      Considerations for using the gateway: only one instance of Data Management Gateway can be installed on a single server, and each gateway instance can be registered with only one data factory. Because three data factories are deployed, three gateway instances, and therefore three new servers, are required.

      NEW QUESTION 6
      You have data in an on-premises Microsoft SQL Server database.
      You must ingest the data into Microsoft Azure Blob storage from the on-premises SQL Server database by using Azure Data Factory.
      You need to identify which tasks must be performed from Azure.
      In which sequence should you perform the actions? To answer, move all of the actions from the list of actions to the answer area and arrange them in the correct order.
      NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select.
      [Exhibit]

        Answer:

        Explanation: Step 1: Configure a Microsoft Data Management Gateway.
        Install and configure the Azure Data Factory Integration Runtime.
        The Integration Runtime is a customer managed data integration infrastructure used by Azure Data Factory to provide data integration capabilities across different network environments. This runtime was formerly called "Data Management Gateway".
        Step 2: Create a linked service for Azure Blob storage
        Create an Azure Storage linked service (destination/sink). You link your Azure storage account to the data factory.
        Step 3: Create a linked service for SQL Server
        Create and encrypt a SQL Server linked service (source)
        In this step, you link your on-premises SQL Server instance to the data factory.
        Step 4: Create an input dataset and an output dataset.
        Create a dataset for the source SQL Server database. In this step, you create input and output datasets. They represent input and output data for the copy operation, which copies data from the on-premises SQL Server database to Azure Blob storage.
        Step 5: Create a pipeline.
        You create a pipeline with a copy activity. The copy activity uses SqlServerDataset as the input dataset and AzureBlobDataset as the output dataset. The source type is set to SqlSource and the sink type is set to BlobSink.
        References: https://docs.microsoft.com/en-us/azure/data-factory/tutorial-hybrid-copy-powershell
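        The steps above map to the ADF v1 PowerShell cmdlets roughly as sketched below. This is an illustration only: the resource group, factory name, and *.json definition files are assumptions, and their contents would follow the linked-service, dataset, and pipeline schemas described in the referenced tutorial.

# Sketch of the Azure-side steps using the ADF v1 AzureRM.DataFactories cmdlets.
# Resource group, factory name, and JSON file names are placeholders.
$rg = "HybridCopyRG"
$df = "OnPremToBlobADF"

# Step 1: register the gateway in the data factory (the gateway software itself
# is then installed on the on-premises server and linked by its key).
New-AzureRmDataFactoryGateway -ResourceGroupName $rg -DataFactoryName $df -Name "OnPremGateway"

# Steps 2-3: linked services for Azure Blob storage (sink) and SQL Server (source).
New-AzureRmDataFactoryLinkedService -ResourceGroupName $rg -DataFactoryName $df -File ".\AzureStorageLinkedService.json"
New-AzureRmDataFactoryLinkedService -ResourceGroupName $rg -DataFactoryName $df -File ".\SqlServerLinkedService.json"

# Step 4: input and output datasets for the copy operation.
New-AzureRmDataFactoryDataset -ResourceGroupName $rg -DataFactoryName $df -File ".\SqlServerDataset.json"
New-AzureRmDataFactoryDataset -ResourceGroupName $rg -DataFactoryName $df -File ".\AzureBlobDataset.json"

# Step 5: the pipeline containing the SqlSource-to-BlobSink copy activity.
New-AzureRmDataFactoryPipeline -ResourceGroupName $rg -DataFactoryName $df -File ".\SqlToBlobPipeline.json"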

        NEW QUESTION 7
        You need to design the data load process from DB1 to DB2. Which data import technique should you use in the design?

        • A. PolyBase
        • B. SQL Server Integration Services (SSIS)
        • C. the Bulk Copy Program (BCP)
        • D. the BULK INSERT statement

        Answer: C

        NEW QUESTION 8
        A company named Fabrikam, Inc., has a web app hosted in Microsoft Azure. Millions of users visit the app daily.
        All of the user visits are logged in Azure Blob storage. Data analysts at Fabrikam built a dashboard that processes the user visit logs.
        Fabrikam plans to use an Apache Hadoop cluster on Azure HDInsight to process queries. The queries will access the data only once.
        You need to recommend a query execution strategy. What is the best recommendation to achieve the goal?
        More than one answer choice may achieve the goal. Select the BEST answer.

        • A. Load the text files to ORC files, and then run dashboard queries on the ORC files.
        • B. Load the text files to sequence files, and then run dashboard queries on the sequence files.
        • C. Run the queries on the text files directly.
        • D. Load the text files to parquet files, and then run dashboard queries on the parquet files.

        Answer: C

        Explanation: File format versatility and intelligent caching: fast analytics on Hadoop have always come with one big catch: they require up-front conversion to a columnar format such as ORCFile, Parquet, or Avro, which is time-consuming, complex, and limits your agility.
        With Interactive Query Dynamic Text Cache, which converts CSV or JSON data into an optimized in-memory format on the fly, caching is dynamic, so the queries determine what data is cached. After text data is cached, analytics run just as fast as if you had converted it to specific file formats. Because the queries here access the data only once, an up-front conversion cannot be amortized, so running the queries directly on the text files is the best choice.
        References:
        https://azure.microsoft.com/en-us/blog/azure-hdinsight-interactive-query-simplifying-big-data-analytics-architec
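        To illustrate the approach of querying the text logs in place, the following sketch defines an external Hive table over the blob-stored logs and runs a dashboard-style aggregation. The cluster name, credentials, storage account, and log column layout are assumptions, not details from the question.

# Sketch only: query the visit logs as text, without converting the files.
# Cluster name, credentials, container, and column layout are placeholders.
Use-AzureRmHDInsightCluster -ClusterName "fabrikam-hdi" -HttpCredential (Get-Credential)

Invoke-AzureRmHDInsightHiveJob -Query @'
CREATE EXTERNAL TABLE IF NOT EXISTS visits (userid STRING, visittime STRING, page STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION 'wasbs://logs@fabrikamstore.blob.core.windows.net/visits/';
SELECT page, COUNT(*) AS hits FROM visits GROUP BY page;
'@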

        NEW QUESTION 9
        You are designing a solution based on the lambda architecture.
        You need to recommend which technology to use for the serving layer. What should you recommend?

        • A. Apache Storm
        • B. Kafka
        • C. Microsoft Azure DocumentDB
        • D. Apache Hadoop

        Answer: C

        Explanation: The serving layer is a bit more complicated in that it needs to be able to answer a single query request against two or more databases, processing platforms, and data storage devices. Apache Druid is an example of a cluster-based tool that can marry the batch and speed layers into a single answerable request. In Azure, DocumentDB (now Azure Cosmos DB) can fill this role by serving both the precomputed batch views and the most recent real-time views through low-latency queries.

        NEW QUESTION 10
        You need to ingest data from various data stores into a Microsoft Azure SQL data warehouse by using PolyBase.
        You create an Azure Data Factory.
        Which three components should you create next? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

        • A. an Azure Function
        • B. datasets
        • C. a pipeline
        • D. an Azure Batch account
        • E. linked services

        Answer: BCE

        NEW QUESTION 11
        You have data pushed to Microsoft Azure Blob storage every few minutes.
        You want to use an Azure Machine Learning web service to score the data hourly. You plan to deploy the data factory pipeline by using a Microsoft .NET application. You need to create an output dataset for the web service.
        Which three properties should you define? Each correct answer presents part of the solution.
        NOTE: Each correct selection is worth one point.

        • A. Source
        • B. LinkedServiceName
        • C. TypeProperties
        • D. Availability
        • E. External

        Answer: BCD
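        For context, a v1-style output dataset illustrating those properties (which correspond to the LinkedServiceName, TypeProperties, and Availability members of the .NET dataset definition) might look like the sketch below. The linked-service name, blob path, dataset name, resource group, and factory name are illustrative only.

# Sketch only: an hourly AzureBlob output dataset showing the linkedServiceName,
# typeProperties, and availability sections. All names and paths are placeholders.
$outputDatasetJson = @'
{
  "name": "ScoredResultsBlob",
  "properties": {
    "type": "AzureBlob",
    "linkedServiceName": "AzureStorageLinkedService",
    "typeProperties": {
      "folderPath": "scored/",
      "format": { "type": "TextFormat", "columnDelimiter": "," }
    },
    "availability": { "frequency": "Hour", "interval": 1 }
  }
}
'@
$outputDatasetJson | Set-Content ".\ScoredResultsBlob.json"
New-AzureRmDataFactoryDataset -ResourceGroupName "MLBatchRG" -DataFactoryName "ScoringADF" -File ".\ScoredResultsBlob.json"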

        NEW QUESTION 12
        You use Microsoft Azure Data Factory to orchestrate data movements and data transformations within Azure. You plan to monitor the data factory to ensure that all of the activity slices run successfully. You need to identify a solution to rerun failed slices. What should you do?

        • A. From the Diagram tile on the Data Factory blade of the Azure portal, double-click the pipeline that has a failed slice.
        • B. Move the data factory to a different resource group.
        • C. From the Azure portal, select the Data slice blade, and then click Run.
        • D. Delete and recreate the data factory.

        Answer: C

        NEW QUESTION 13
        You are designing an Apache HBase cluster on Microsoft Azure HDInsight. You need to identify which nodes are required for the cluster.
        Which three nodes should you identify? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

        • A. Nimbus
        • B. Zookeeper
        • C. Region
        • D. Supervisor
        • E. Falcon
        • F. Head

        Answer: BCF

        Explanation: https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters
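        As a hedged illustration, provisioning an HBase cluster with the AzureRM.HDInsight cmdlets requests only the worker (region) node count; the head and ZooKeeper nodes are added by the service automatically. All names, credentials, and the storage account below are placeholders.

# Sketch only: create a Linux-based HBase cluster; placeholder values throughout.
$httpCred   = Get-Credential -Message "Cluster login (HTTP) credentials"
$sshCred    = Get-Credential -Message "SSH credentials"
$storageKey = "<storage-account-key>"

New-AzureRmHDInsightCluster -ResourceGroupName "HBaseRG" -ClusterName "fabrikam-hbase" `
    -Location "East US" -ClusterType HBase -OSType Linux -ClusterSizeInNodes 4 `
    -HttpCredential $httpCred -SshCredential $sshCred `
    -DefaultStorageAccountName "fabrikamstore.blob.core.windows.net" `
    -DefaultStorageAccountKey $storageKey -DefaultStorageContainer "hbase"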

        NEW QUESTION 14
        You plan to deploy a storage solution to store the output of stream analytics. You plan to store the data for the following three types of data streams:
        • Unstructured JSON data
        • Exploratory analytics
        • Pictures
        You need to implement a storage solution for the data stream types.
        Which storage solution should you implement for each data stream type? To answer, drag the appropriate storage solutions to the correct data stream types. Each storage solution may be used once, more than once, or not at all. You may need to drag the split bar between the panes or scroll to view content.
        NOTE: Each correct selection is worth one point.
        [Exhibit]

          Answer:

          Explanation: Box 1: Azure Data Lake Store
          Stream Analytics supports Azure Data Lake Store. Azure Data Lake Store is an enterprise-wide hyper-scale repository for big data analytic workloads. Data Lake Store enables you to store data of any size, type and ingestion speed for operational and exploratory analytics. Stream Analytics has to be authorized to access the Data Lake Store.
          Box 2: Azure Cosmos DB
          Stream Analytics can target Azure Cosmos DB for JSON output, enabling data archiving and low-latency queries on unstructured JSON data.
          Box 3: Azure Blob Storage
          Blob storage offers a cost-effective and scalable solution for storing large amounts of unstructured data in the cloud.
          Incorrect Answers:
          Azure SQL Database:
          Azure SQL Database can be used as an output for data that is relational in nature or for applications that depend on content being hosted in a relational database. Stream Analytics jobs write to an existing table in an Azure SQL Database.
          Azure Service Bus Queue:
          Service Bus Queues offer a First In, First Out (FIFO) message delivery to one or more competing consumers. Typically, messages are expected to be received and processed by the receivers in the temporal order in which they were added to the queue, and each message is received and processed by only one message consumer.
          Azure Table Storage
          Azure Table storage offers highly available, massively scalable storage, so that an application can automatically scale to meet user demand. Table storage is Microsoft’s NoSQL key/attribute store, which one can leverage for structured data with fewer constraints on the schema. Azure Table storage can be used to store data for persistence and efficient retrieval.
          References: https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-define-outputs
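          Tying the three boxes back to a Stream Analytics job: each output is defined in a JSON file and attached with the AzureRM.StreamAnalytics cmdlets, as in the sketch below. The job name, output names, and definition files are assumptions; their datasource sections would target Data Lake Store, Cosmos DB, and Blob storage respectively.

# Sketch only: attach one output per data stream type; names and files are placeholders.
$rg  = "StreamRG"
$job = "TelemetryJob"

# Exploratory analytics -> Azure Data Lake Store
New-AzureRmStreamAnalyticsOutput -ResourceGroupName $rg -JobName $job -Name "ExploratoryAdls" -File ".\DataLakeStoreOutput.json"
# Unstructured JSON data -> Azure Cosmos DB
New-AzureRmStreamAnalyticsOutput -ResourceGroupName $rg -JobName $job -Name "JsonCosmosDb" -File ".\CosmosDbOutput.json"
# Pictures -> Azure Blob storage
New-AzureRmStreamAnalyticsOutput -ResourceGroupName $rg -JobName $job -Name "PictureBlobs" -File ".\BlobOutput.json"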

          NEW QUESTION 15
          The settings used for slice processing are described in the following table.
          [Exhibit]
          If the slice processing fails, you need to identify the number of retries that will be performed before the slice execution status changes to failed.
          How many retries should you identify?

          • A. 2
          • B. 3
          • C. 5
          • D. 6

          Answer: C

          NEW QUESTION 16
          You have an analytics solution in Microsoft Azure that must be operationalized.
          You have the relevant data in Azure Blob storage. You use an Azure HDInsight cluster to process the data. You plan to process the raw data files by using Azure HDInsight. Azure Data Factory will operationalize the solution.
          You need to create a data factory to orchestrate the data movement. Output data must be written back to Azure Blob storage.
          Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
          [Exhibit]

            Answer:

            Explanation: [Exhibit]

            Recommended! Get the full 70-475 dumps in VCE and PDF from Surepassexam. Welcome to download: https://www.surepassexam.com/70-475-exam-dumps.html (New 102 Q&As Version)