Hello, this is Neko.

This is the blog of a self-proclaimed programmer auntie. Life throws all sorts of things at you, doesn't it?

MCSA SQL Server 2016 /70-762/ All the Things I Noticed

2020-01-11 00:27:31 | Programming Study

Somehow I never had the time to write these up properly, but I've finally finished watching all of the TBC Nuggets videos, so here are my notes.

I watched them again after working through actual past exam questions, so this time the parts I really need stood out clearly.

Next time I watch them, I'd like it to be after I've passed the exam. Even apart from the exam, there was plenty of content that looks genuinely useful.

【How to improve performance with your statements】

  • Seek is best, Scan comes second, and a Hash with no supporting index is the worst.
  • If an index exists but there is no WHERE clause, you get a Scan. Add a WHERE clause that narrows the result set and the plan can become a Seek, which performs better (see the sketch after this list).
  • If you use ORDER BY, sort on the clustered index key; it is low cost because the data is already sorted ascending.
  • Even with the correct indexes in place, the cost can stay high because of an aggregate (GROUP BY). In that case, create a view WITH SCHEMABINDING that contains the same aggregation statement (even if you would not otherwise need the view), and create a unique clustered index on the view's GROUP BY columns. (A full sketch is in the Designing Views section below.)
  • Do not forget to use the view's alias ("AS") names.
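
A minimal sketch of the Scan-vs-Seek point above; the table, index, and values are made up for illustration, and the exact plan the optimizer picks still depends on statistics and selectivity.

CREATE TABLE dbo.Orders (OrderID INT PRIMARY KEY, CustomerID INT, OrderDate DATE);
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);

-- No WHERE clause: the whole index is read (Index Scan).
SELECT CustomerID FROM dbo.Orders;

-- Selective WHERE clause on the indexed column: typically an Index Seek, much cheaper.
SELECT CustomerID FROM dbo.Orders WHERE CustomerID = 42;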

[Monitoring performance]
- Performance Monitor
- SQL Server Profiler
   Traces can be captured in real time.

- Extended Events
   You choose a session.
   You choose the events.

[INDEX design]
- SET STATISTICS IO ON/OFF and SET STATISTICS TIME ON/OFF show detailed messages such as the number of logical reads.
"read 6"->"read 2"---improved
- Make the column you compare on most often the key of a nonclustered index. You can build several index patterns this way. (The extra columns are, of course, nonkey columns.)
- In INCLUDE, add the columns that appear in the SELECT list so the index covers the query (see the sketch below).
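
A minimal sketch of a covering nonclustered index checked with STATISTICS IO/TIME; the dbo.Employee table and its columns are assumed here for illustration.

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT FirstName, LastName
FROM dbo.Employee
WHERE LastName = N'Tanaka';          -- note the "logical reads" value in the Messages tab

-- Covering index: the compared column is the key, the SELECT-list column goes in INCLUDE.
CREATE NONCLUSTERED INDEX IX_Employee_LastName
ON dbo.Employee (LastName)
INCLUDE (FirstName);

SELECT FirstName, LastName
FROM dbo.Employee
WHERE LastName = N'Tanaka';          -- logical reads should drop (e.g. "read 6" -> "read 2")

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;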

[Designing Views]
- Indexed Views
  They contain an aggregation and/or a JOIN.
  Need WITH SCHEMABINDING.
  Need to create a UNIQUE CLUSTERED INDEX whose key columns come from the view's SELECT (GROUP BY) list.
  This improves performance.

A very good approach: first create the indexed view and its unique clustered index, and then add a nonclustered index on top of it for an even bigger performance gain (a sketch follows).
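
A minimal sketch of an indexed view, assuming a hypothetical dbo.SalesOrderDetail table and column names; the detailed rules (two-part table names, COUNT_BIG(*) when GROUP BY is used, and so on) are in the CREATE VIEW documentation.

CREATE VIEW dbo.vSalesByProduct
WITH SCHEMABINDING
AS
SELECT ProductID,
       SUM(LineTotal) AS TotalSales,
       COUNT_BIG(*)   AS RowCnt      -- COUNT_BIG(*) is required when the view has GROUP BY
FROM dbo.SalesOrderDetail
GROUP BY ProductID;
GO

-- The first index on the view must be UNIQUE CLUSTERED on the GROUP BY column(s).
CREATE UNIQUE CLUSTERED INDEX IX_vSalesByProduct ON dbo.vSalesByProduct (ProductID);
GO

-- Further nonclustered indexes can then be added on top.
CREATE NONCLUSTERED INDEX IX_vSalesByProduct_TotalSales ON dbo.vSalesByProduct (TotalSales);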

- Partitioned views
   Use UNION ALL.
   Can span databases (cross-database partitioned views).

A very good approach: first create the member tables (each including a CHECK constraint), then build the partitioned view with UNION ALL, and then you can insert values through the view in one go, roughly like the sketch below.
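
A minimal sketch of a local partitioned view; the table and column names are made up, and the partitioning column has to be part of each member table's primary key for inserts through the view to work.

CREATE TABLE dbo.Sales2018 (
    SaleID   INT NOT NULL,
    SaleYear INT NOT NULL CHECK (SaleYear = 2018),
    Amount   MONEY,
    CONSTRAINT PK_Sales2018 PRIMARY KEY (SaleID, SaleYear));

CREATE TABLE dbo.Sales2019 (
    SaleID   INT NOT NULL,
    SaleYear INT NOT NULL CHECK (SaleYear = 2019),
    Amount   MONEY,
    CONSTRAINT PK_Sales2019 PRIMARY KEY (SaleID, SaleYear));
GO

CREATE VIEW dbo.SalesAll
AS
SELECT SaleID, SaleYear, Amount FROM dbo.Sales2018
UNION ALL
SELECT SaleID, SaleYear, Amount FROM dbo.Sales2019;
GO

-- Rows inserted through the view are routed to the member table whose CHECK constraint matches.
INSERT INTO dbo.SalesAll (SaleID, SaleYear, Amount)
VALUES (1, 2018, 100), (2, 2019, 200);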

- Updateable views
   Only one target base table can be modified at a time.
   Need WITH CHECK OPTION when the view filters rows (e.g. WHERE a = 2).

  1. Create a view joining both tables and showing all columns, with a WHERE clause and the WITH CHECK OPTION statement.
  2. Add an INSTEAD OF UPDATE trigger to the view.
  3. Allow the application to query and update records directly for the table-valued function (see the sketch below).
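
A minimal sketch of step 1, using made-up dbo.OrderHeader / dbo.Customer tables. (Note: SQL Server does not allow an INSTEAD OF trigger on an updatable view that specifies WITH CHECK OPTION, so in practice the trigger is for views that are not directly updatable; an INSTEAD OF trigger example is in the TRIGGERS section below.)

CREATE VIEW dbo.ActiveOrderInfo
AS
SELECT o.OrderID, o.OrderStatus, o.CustomerID, c.CustomerName
FROM dbo.OrderHeader AS o
JOIN dbo.Customer AS c ON c.CustomerID = o.CustomerID
WHERE o.OrderStatus = 2
WITH CHECK OPTION;   -- a modification through the view may not move a row outside OrderStatus = 2
GO

-- A single UPDATE through the view may touch the columns of only one base table:
UPDATE dbo.ActiveOrderInfo SET CustomerName = N'Contoso Ltd.' WHERE OrderID = 10;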

 

 

[Designing Constraints]
- table constraints
   You can name them whatever you want, which makes them easier to maintain; otherwise the constraint gets a machine-generated name.
  Table constraints include PRIMARY KEY / FOREIGN KEY, and they can cover two or more columns.
- column constraints
  CHECK, DEFAULT, NOT NULL, etc. (see the sketch below)
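
A minimal sketch contrasting named table-level constraints with column-level constraints; the table and column names are made up.

CREATE TABLE dbo.OrderLine
(
    OrderID    INT NOT NULL,                                            -- column constraint: NOT NULL
    LineNumber INT NOT NULL,
    Qty        INT NOT NULL CONSTRAINT DF_OrderLine_Qty DEFAULT (1),    -- named column constraint
    -- Table-level constraints: you pick the names, and they can span two or more columns.
    CONSTRAINT PK_OrderLine       PRIMARY KEY (OrderID, LineNumber),
    CONSTRAINT FK_OrderLine_Order FOREIGN KEY (OrderID) REFERENCES dbo.Orders (OrderID),
    CONSTRAINT CK_OrderLine_Qty   CHECK (Qty > 0)
);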


===============
[columnstore]
http://etlpoint.com/key-differences-between-an-oltp-system-and-a-data-warehouse/

Columnstore indexes are the standard for storing and querying large data warehousing (OLAP) fact tables.

- data warehouse (i.e. OLAP)
   Data warehouses are designed to accommodate ad hoc queries and data analysis. They are not usually in third normal form (3NF).
- OLTP
   Usually in third normal form (3NF), a type of data normalization common in OLTP environments.

- clustered columnstore index
  The physical storage for the entire table, and the only index on it (new rows arrive through the delta store).

- non-clustered columnstore index 
  Suited to analysis queries that run (read-only) at the same time as the OLTP workload (see the sketch below).
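
A minimal sketch, assuming a hypothetical dbo.FactSales table: a nonclustered columnstore index lets analytics run alongside the OLTP rowstore, while a clustered columnstore index would replace the rowstore entirely.

-- Analytics copy on top of the existing OLTP rowstore table:
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
ON dbo.FactSales (ProductID, SaleDate, Quantity, Amount);

-- Data-warehouse style: the columnstore is the table's physical storage.
-- CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;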

[Stored Procedure]
- input/output parameters
- error handling
- return values based on conditions
- READONLY is required when declaring a table type parameter
- SCOPE_IDENTITY() captures the identity value created inside the procedure
- table types: create a type, load values into it temporarily, and use it inside the procedure
- BEGIN TRANSACTION (TRY/CATCH with ROLLBACK), COMMIT TRANSACTION (the sketch below pulls these together)
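
A minimal sketch combining the points above; it assumes a dbo.SalesOrder table with an IDENTITY key and a dbo.SalesOrderLine detail table, both made up for illustration.

CREATE TYPE dbo.OrderLineType AS TABLE (ProductID INT, Qty INT);
GO

CREATE PROCEDURE dbo.AddSalesOrder
    @CustomerID INT,
    @Lines      dbo.OrderLineType READONLY,   -- table-valued parameters must be declared READONLY
    @NewOrderID INT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;

        INSERT INTO dbo.SalesOrder (CustomerID) VALUES (@CustomerID);
        SET @NewOrderID = SCOPE_IDENTITY();   -- identity value created inside this procedure

        INSERT INTO dbo.SalesOrderLine (OrderID, ProductID, Qty)
        SELECT @NewOrderID, ProductID, Qty FROM @Lines;

        COMMIT TRANSACTION;
        RETURN 0;   -- success
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        RETURN 1;   -- failure
    END CATCH
END;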

[TRIGGERS]
- good for auditing
- INSTEAD OF INSERT
  For example, suppose there is an updatable view through which a SupplierName is inserted directly, instead of a SupplierID. The Item table that the view actually sits on requires a SupplierID. So when an INSERT against the view is issued, the INSTEAD OF trigger fires in its place, looks up the SupplierID with a SELECT, and inserts the SupplierID and the other required column values into the Item table itself, so the Item table ends up with a properly populated SupplierID.
- When an updatable view contains more than one join and needs to accept inserts, you can use an INSTEAD OF INSERT trigger to set all the columns of the underlying tables that need to be filled in (see the sketch below).
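
A minimal sketch of that INSTEAD OF INSERT pattern; the view dbo.ItemView and the dbo.Item / dbo.Supplier tables and their columns are assumptions for illustration.

CREATE TRIGGER dbo.trg_ItemView_Insert ON dbo.ItemView
INSTEAD OF INSERT
AS
BEGIN
    -- Resolve the SupplierName supplied through the view to a SupplierID,
    -- then insert the fully populated row into the base Item table.
    INSERT INTO dbo.Item (ItemName, SupplierID)
    SELECT i.ItemName, s.SupplierID
    FROM inserted AS i
    JOIN dbo.Supplier AS s ON s.SupplierName = i.SupplierName;
END;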

[Lock]
- IMPLICIT_TRANSACTIONS ON/OFF
  SET IMPLICIT_TRANSACTIONS ON -> @@TRANCOUNT = 1 (rows stay locked, and SELECTs from other sessions just keep waiting) -> COMMIT TRANSACTION (manually) -> @@TRANCOUNT = 0 (transaction committed)
- XACT_STATE() <> 0 means there is an open transaction.
- An FK relationship means you can NOT simply delete or update the referenced rows.
- SET XACT_ABORT ON rolls back the entire transaction when a timeout (or run-time error) occurs (see the sketch below).
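
A minimal sketch of implicit transactions and XACT_ABORT; dbo.Orders here is the made-up table from the first sketch above.

SET XACT_ABORT ON;              -- a run-time error or timeout rolls back the whole transaction
SET IMPLICIT_TRANSACTIONS ON;

UPDATE dbo.Orders SET OrderDate = GETDATE() WHERE OrderID = 1;   -- implicitly opens a transaction
SELECT @@TRANCOUNT AS OpenTran;                                  -- 1: rows stay locked until commit

COMMIT TRANSACTION;                                              -- must be committed manually
SELECT @@TRANCOUNT AS OpenTran;                                  -- back to 0

SET IMPLICIT_TRANSACTIONS OFF;                                   -- return to autocommit mode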

[Isolation Level]
READ UNCOMMITTED -> dirty reads (uncommitted changes show up in SELECT results)
READ COMMITTED -> the default. Dirty reads do not show up at all.
REPEATABLE READ -> prevents dirty reads (and non-repeatable reads), but phantom reads can still happen.
SERIALIZABLE -> the most pessimistic level. No dirty reads, no phantom reads.

ALLOW_SNAPSHOT_ISOLATION -> snapshot isolation at the transaction level
READ_COMMITTED_SNAPSHOT -> row versioning at the statement level (see the sketch below)
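
A minimal sketch of how these are set; DB1 is a placeholder database name.

-- Session-level isolation (applies to the transactions on this connection):
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;   -- allows dirty reads
-- SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- Row-versioning options are database-level settings:
ALTER DATABASE DB1 SET ALLOW_SNAPSHOT_ISOLATION ON;   -- transaction-level snapshot isolation
ALTER DATABASE DB1 SET READ_COMMITTED_SNAPSHOT ON;    -- statement-level row versioning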

[Memory-Optimized Table]
1. ALTER DATABASE to add a filegroup that CONTAINS MEMORY_OPTIMIZED_DATA, and add a file (you set the file location and name) to that filegroup.
2. When creating the table, add WITH (MEMORY_OPTIMIZED = ON), or WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY).
3. The durability default is SCHEMA_AND_DATA, so data persists across a shutdown or restart. SCHEMA_ONLY means the data is not persisted (temp-style, ETL-style, non-persistent workloads). A sketch follows.
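
A minimal sketch of those steps; DB1, the file path, and the table are placeholders.

-- 1. Memory-optimized filegroup plus a container (file location and name).
ALTER DATABASE DB1 ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE DB1 ADD FILE (NAME = 'imoltp_data', FILENAME = 'C:\Data\imoltp_data')
    TO FILEGROUP imoltp_fg;
GO

-- 2./3. Memory-optimized table; DURABILITY defaults to SCHEMA_AND_DATA if omitted.
CREATE TABLE dbo.SessionCache
(
    SessionID INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload   NVARCHAR(200)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);   -- SCHEMA_ONLY: rows are not persisted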

[Natively-compiled stored procedure]
- Needs SCHEMABINDING.
- BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL, LANGUAGE)
  Durability is ON.
  Thanks to BEGIN ATOMIC, no explicit BEGIN/COMMIT TRANSACTION is needed.
  Use it together with memory-optimized tables; roughly 30% faster.
- Cannot use subqueries.
- Cannot use SET NOCOUNT ON. (See the sketch below.)
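
A minimal sketch of a natively compiled procedure over the memory-optimized table sketched above; the procedure name and body are assumptions.

CREATE PROCEDURE dbo.usp_GetSessionCount
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- BEGIN ATOMIC replaces BEGIN/COMMIT TRANSACTION: the block commits or rolls back as one unit.
    SELECT COUNT(*) AS SessionCount FROM dbo.SessionCache;
END;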

 

[Implicit Transaction]

When ON, the system is in implicit transaction mode. This means that if @@TRANCOUNT = 0, any of the following Transact-SQL statements begins a new transaction. It is equivalent to an unseen BEGIN TRANSACTION being executed first:

When OFF, each of the preceding T-SQL statements is bounded by an unseen BEGIN TRANSACTION and an unseen COMMIT TRANSACTION statement. When OFF, we say the transaction mode is autocommit. If your T-SQL code visibly issues a BEGIN TRANSACTION, we say the transaction mode is explicit.

(In other words: in autocommit mode there is no visible BEGIN TRANSACTION / COMMIT TRANSACTION!)

 

During a break I watched Shinji Takeda's episode of "Shikujiri Sensei." It really hit home. Being compared to others, and comparing yourself, throws you off balance, and the nicer you are, the harder you try never to fall short of anyone's expectations.

As Takeda-san says, I have to accept the me that can't do it all and live my own life.


MCSA SQL Server 2016 /70-762/ Similar Questions: Monitoring

2020-01-10 05:07:01 | Programming Study

【Questions about monitoring】

1.

You are a database developer for a company. The company has a server that has multiple physical disks. The disks are not part of a RAID array. The server hosts three Microsoft SQL Server instances. There are many SQL jobs that run during off-peak hours.
You must monitor and optimize the SQL Server to maximize throughput, response time, and overall SQL performance.
You need to identify previous situations where a modification has prevented queries from selecting data in tables.
What should you do?

A. Create a sys.dm_os_waiting_tasks query.

B. Create a sys.dm_exec_sessions query.

C. Create a Performance Monitor Data Collector Set.

D. Create a sys.dm_os_memory_objects query.

E. Create a sp_configure 'max server memory' query.

F. Create a SQL Profiler trace.

G. Create a sys.dm_os_wait_stats query.

H. Create an Extended Event.

Correct Answer: G
sys.dm_os_wait_stats returns information about all the waits encountered by threads that executed. You can use this aggregated view to diagnose performance issues with SQL Server and also with specific queries and batches.


2.

You are a database developer for a company. The company has a server that has multiple physical disks. The disks are not part of a RAID array. The server hosts three Microsoft SQL Server instances. There are many SQL jobs that run during off-peak hours.
You observe that many deadlocks appear to be happening during specific times of the day.
You need to monitor the SQL environment and capture the information about the processes that are causing the deadlocks. Captured information must be viewable as the queries are running.
What should you do?

A. Create a sys.dm_os_waiting_tasks query.

B. Create a sys.dm_exec_sessions query.

C. Create a Performance Monitor Data Collector Set.

D. Create a sys.dm_os_memory_objects query.

E. Create a sp_configure 'max server memory' query.

F. Create a SQL Profiler trace.

G. Create a sys.dm_os_wait_stats query.

H. Create an Extended Event.

Correct Answer: F
To view deadlock information, the Database Engine provides monitoring tools in the form of two trace flags, and the deadlock graph event in SQL Server Profiler.
Trace Flag 1204 and Trace Flag 1222
When deadlocks occur, trace flag 1204 and trace flag 1222 return information that is captured in the SQL Server error log. Trace flag 1204 reports deadlock information formatted by each node involved in the deadlock. Trace flag 1222 formats deadlock information, first by processes and then by resources. It is possible to enable both trace flags to obtain two representations of the same deadlock event.
References:
https://technet.microsoft.com/en-us/library/ms178104(v=sql.105).aspx


3.

You are a database developer for a company. The company has a server that has multiple physical disks. The disks are not part of a RAID array. The server hosts three Microsoft SQL Server instances. There are many SQL jobs that run during off-peak hours.
You must monitor the SQL Server instances in real time and optimize the server to maximize throughput, response time, and overall SQL performance.
What should you do?

A. Create a sys.dm_os_waiting_tasks query.

B. Create a sys.dm_exec_sessions query.

C. Create a Performance Monitor Data Collector Set.

D. Create a sys.dm_os_memory_objects query.

E. Create a sp_configure 'max server memory' query.

F. Create a SQL Profiler trace.

G. Create a sys.dm_os_wait_stats query.

H. Create an Extended Event.

Correct Answer: B
sys.dm_exec_sessions returns one row per authenticated session on SQL Server. sys.dm_exec_sessions is a server-scope view that shows information about all active user connections and internal tasks. This information includes client version, client program name, client login time, login user, current session setting, and more. Use sys.dm_exec_sessions to first view the current system load and to identify a session of interest, and then learn more information about that session by using other dynamic management views or dynamic management functions.
Examples of use include finding long-running cursors, and finding idle sessions that have open transactions.


4.

You are a database developer for a company. The company has a server that has multiple physical disks. The disks are not part of a RAID array. The server hosts three Microsoft SQL Server instances. There are many SQL jobs that run during off-peak hours.
You must monitor the SQL Server instances in real time and optimize the server to maximize throughput, response time, and overall SQL performance.
You need to ensure that the performance of each instance is consistent for the same queries and query plans.
What should you do?

A. Create a sys.dm_os_waiting_tasks query.

B. Create a sys.dm_exec_sessions query.

C. Create a Performance Monitor Data Collector Set.

D. Create a sys.dm_os_memory_objects query.

E. Create a sp_configure 'max server memory' query.

F. Create a SQL Profiler trace.

G. Create a sys.dm_os_wait_stats query.

H. Create an Extended Event.

Correct Answer: H
Advanced Viewing of Target Data from Extended Events in SQL Server
When your event session is currently active, you might want to watch the event data in real time, as it is received by the target.
Management > Extended Events > Sessions > [your-session] > Watch Live Data.
The query_post_execution_showplan extended event enables you to see the actual query plan in the SQL Server Management Studio (SSMS) UI. When the
Details pane is visible, you can see a graph of the query plan on the Query Plan tab. By hovering over a node on the query plan, you can see a list of property names and their values for the node.

References:
https://msdn.microsoft.com/en-us/library/mt752502.aspx


5.

You are a database developer for a company. The company has a server that has multiple physical disks. The disks are not part of a RAID array. The server hosts three SQL Server instances. There are many SQL jobs that run during off-peak hours.
You must monitor the SQL Server instances in real time and optimize the server to maximize throughput, response time, and overall SQL performance.
You need to collect query performance data while minimizing the performance impact on the SQL Server.
What should you do?

A. Create a sys.dm_os_waiting_tasks query.

B. Create a sys.dm_exec_sessions query.

C. Create a Performance Monitor Data Collector Set.

D. Create a sys.dm_os_memory_objects query.

E. Create a sp_configure 'max server memory' query.

F. Create a SQL Profiler trace.

G. Create a sys.dm_os_wait_stats query.

H. Create an Extended Event.

Correct Answer: C
SQL Server Data Collector is a feature for performance monitoring and tuning available in SQL Server Management Studio.
Integration Services packages transform and load the collected data into the Microsoft Data Warehouse database.
Collection sets are defined and deployed on a server instance and can be run independently of each other. Each collection set can be applied to a target that matches the target types of all the collector types that are part of a collection set. The collection set is run by a SQL Server Agent job or jobs, and data is uploaded to the management data warehouse on a predefined schedule.
Predefined data collection sets include:
✑ The Query Statistics data collection set collects information about query statistics, activity, execution plans and text on the SQL Server instance. It does not store all executed statements, only 10 worst performing ones.
✑ Disk Usage data collection set collects information about disk space used by both data and log files for all databases on the SQL Server instance, growth trends, and average day growth.
Etc.
References:
http://www.sqlshack.com/sql-server-performance-monitoring-data-collector/


6.

You are a database developer for a company. The company has a server that has multiple physical disks. The disks are not part of a RAID array. The server hosts three SQL Server instances. There are many SQL jobs that run during off-peak hours.
You must monitor the SQL Server instances in real time and optimize the server to maximize throughput, response time, and overall SQL performance.
You need to create a baseline set of metrics to report how the computer running SQL Server operates under normal load. The baseline must include the resource usage associated with the server processes.
What should you do?

A. Create a sys.dm_os_waiting_tasks query.

B. Create a sys.dm_exec_sessions query.

C. Create a Performance Monitor Data Collector Set.

D. Create a sys.dm_os_memory_objects query.

E. Create a sp_configure 'max server memory' query.

F. Create a SQL Profiler trace.

G. Create a sys.dm_os_wait_stats query.

H. Create an Extended Event.

Correct Answer: D
sys.dm_os_memory_objects returns memory objects that are currently allocated by SQL Server. You can use sys.dm_os_memory_objects to analyze memory use and to identify possible memory leaks.
Example: The following example returns the amount of memory allocated by each memory object type.

SELECT SUM (pages_in_bytes) as 'Bytes Used', type
FROM sys.dm_os_memory_objects
GROUP BY type
ORDER BY 'Bytes Used' DESC;
GO


7.

You are experiencing performance issues with the database server.
You need to evaluate schema locking issues, plan cache memory pressure points, and backup I/O problems.
What should you create?

A. a System Monitor report

B. a sys.dm_tran_database_transaction dynamic management view query

C. an Extended Events session that uses Query Editor

D. an Activity Monitor session in Microsoft SQL Management Studio.

Correct Answer: D
References:
https://msdn.microsoft.com/en-us/library/hh212951.aspx


8.

You use Microsoft SQL Server Profiler to evaluate a query named Query1. The Profiler report indicates the following issues:
✑ At each level of the query plan, a low total number of rows are processed.
✑ The query uses many operations. This results in a high overall cost for the query.
You need to identify the information that will be useful for the optimizer.
What should you do?

A. Start a SQL Server Profiler trace for the event class Performance statistics in the Performance event category.

B. Create one Extended Events session with the sqlserver.missing_column_statistics event added.

C. Start a SQL Server Profiler trace for the event class Soft Warnings in the Errors and Warnings event category.

D. Create one Extended Events session with the sqlserver.error_reported event added.

Correct Answer: A
The Performance Statistics event class can be used to monitor the performance of queries, stored procedures, and triggers that are executing. Each of the six event subclasses indicates an event in the lifetime of queries, stored procedures, and triggers within the system. Using the combination of these event subclasses and the associated sys.dm_exec_query_stats, sys.dm_exec_procedure_stats and sys.dm_exec_trigger_stats dynamic management views, you can reconstitute the performance history of any given query, stored procedure, or trigger.
References:
https://docs.microsoft.com/en-us/sql/relational-databases/event-classes/performance-statistics-event-class?view=sql-server-2017


9.

You have an existing Microsoft SQL Trace script. You plan to convert the script to an Extended Events session.
You need to collect the trace ID and other required information.
Which system table should you use?

A. dbo.sysnotifications

B. dbo.syssessions

C. trace_xe_action_map

D. sysdbmainplans

Correct Answer: C
References:
https://docs.microsoft.com/en-us/sql/relational-databases/extended-events/convert-an-existing-sql-trace-script-to-an-extended-events-session?view=sql-server-2017


10.

You are a database developer for a company. The company has a server that has multiple physical disks. The disks are not part of a RAID array. The server hosts three Microsoft SQL Server instances. There are many SQL jobs that run during off-peak hours.
You must monitor and optimize the SQL Server to maximize throughput, response time, and overall SQL performance.
You need to examine delays in executed threads, including errors with queries and batches.
What should you do?

A. Create a sys.dm_os_waiting_tasks query.

B. Create a sys.dm_exec_sessions query.

C. Create a Performance Monitor Data Collector Set.

D. Create a sys.dm_os_memory_objects query.

E. Create a sp_configure 'max server memory' query.

F. Create a SQL Profiler trace.

G. Create a sys.dm_os_wait_stats query.

H. Create an Extended Event.

Correct Answer: G
sys.dm_os_wait_stats returns information about all the waits encountered by threads that executed. You can use this aggregated view to diagnose performance issues with SQL Server and also with specific queries and batches.
Incorrect Answers:
A: sys.dm_os_waiting_tasks returns information about the wait queue of tasks that are waiting on some resource.
References:
https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-os-wait-stats-transact-sql


11.

You are experiencing performance issues with the database server.
You need to evaluate schema locking issues, plan cache memory pressure points, and backup I/O problems.
What should you create?

A. a System Monitor report

B. a sys.dm_tran_database_transaction dynamic management view query

C. an Extended Events session that uses Query Editor

D. a Microsoft SQL Profiler trace

Correct Answer: C
Extended Events: considered as "the best way" by the SQL Server purists. You can configure Extended Events to find Locking Issues in SQL Server.
Incorrect Answers:
D: SQL Trace and SQL Server Profiler are deprecated. The Microsoft.SqlServer.Management.Trace namespace that contains the Microsoft SQL Server Trace and Replay objects are also deprecated.
This feature is in maintenance mode and may be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature.
Use Extended Events instead.
References:
https://www.mssqltips.com/sqlservertip/5752/configuring-extended-events-to-find-locking-issues-in-sql-server/


12.

You have multiple queries that take a long time to complete.
You need to identify the cause by using detailed information about the Transact-SQL statements in the queries. The Transact-SQL statements must not run as part of the analysis.
Which Transact-SQL statement should you run?

A. SET STATISTICS IO ON

B. SET SHOWPLAN_TEXT ON

C. SET STATISTICS XML ON

D. SET STATISTICS PROFILE ON

Correct Answer: B
SET SHOWPLAN_TEXT ON causes Microsoft SQL Server not to execute Transact-SQL statements. Instead, SQL Server returns detailed information about how the statements are executed.
Incorrect Answers:
C: SET STATISTICS XML ON causes Microsoft SQL Server to execute Transact-SQL statements and generate detailed information about how the statements were executed in the form of a well-defined XML document.
D: When STATISTICS PROFILE is ON, each executed query returns its regular result set, followed by an additional result set that shows a profile of the query execution.
The additional result set contains the SHOWPLAN_ALL columns for the query and these additional columns.

Column name, Description -
Rows: Actual number of rows produced by each operator
Executes: Number of times the operator has been executed
References:
https://docs.microsoft.com/en-us/sql/t-sql/statements/set-showplan-text-transact-sql


 


MCSA SQL Server 2016 /70-762/ Similar Questions: INDEX (questions other than the box-style ones)

2020-01-10 04:48:11 | Programming Study

1.

You work on an OLTP database that has no memory-optimized file group defined.
You have a table named tblTransaction that is persisted on disk and contains the information described in the following table:

Users report that the following query takes a long time to complete.

You need to create an index that:
- improves the query performance
- does not impact the existing index
- minimizes storage size of the table (inclusive of index pages).
What should you do?

A. Create a clustered index on the table.

B. Create a nonclustered index on the table.

C. Create a nonclustered filtered index on the table.

D. Create a clustered columnstore index on the table.

E. Create a nonclustered columnstore index on the table.

F. Create a hash index on the table.

Correct Answer: C
A filtered index is an optimized nonclustered index, especially suited to cover queries that select from a well-defined subset of data. It uses a filter predicate to index a portion of rows in the table. A well-designed filtered index can improve query performance, reduce index maintenance costs, and reduce index storage costs compared with full-table indexes.
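
For reference, a minimal sketch of a filtered index on the question's tblTransaction table; the column names and the filter predicate are assumptions, since the original table definition isn't shown.

CREATE NONCLUSTERED INDEX IX_tblTransaction_Unprocessed
ON dbo.tblTransaction (TransactionDate)
WHERE IsProcessed = 0;   -- only the well-defined subset the query filters on is indexed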


2.

You have a database named DB1. There is no memory-optimized filegroup in the database.
You run the following query:

The following image displays the execution plan the query optimizer generates for this query:

Users frequently run the same query with different values for the local variable @lastName. The table named Person is persisted on disk.
You need to create an index on the Person.Person table that meets the following requirements:
1. All users must be able to benefit from the index.
2. FirstName must be added to the index as an included column.
What should you do?

A. Create a clustered index on the table.

B. Create a nonclustered index on the table.

C. Create a nonclustered filtered index on the table.

D. Create a clustered columnstore index on the table.

E. Create a nonclustered columnstore index on the table.

F. Create a hash index on the table.

Correct Answer: B
By including nonkey columns, you can create nonclustered indexes that cover more queries. This is because the nonkey columns have the following benefits:
They can be data types not allowed as index key columns.
They are not considered by the Database Engine when calculating the number of index key columns or index key size.
Reference:
https://docs.microsoft.com/en-us/sql/relational-databases/indexes/create-indexes-with-included-columns?view=sql-server-2017


3.

You have a reporting database that includes a non-partitioned fact table named Fact_Sales. The table is persisted on disk.
Users report that their queries take a long time to complete. The system administrator reports that the table takes too much space in the database. You observe that there are no indexes defined on the table, and many columns have repeating values.
You need to create the most efficient index on the table, minimize disk storage and improve reporting query performance.
What should you do?

A. Create a clustered index on the table.

B. Create a nonclustered index on the table.

C. Create a nonclustered filtered index on the table.

D. Create a clustered columnstore index on the table.

E. Create a nonclustered columnstore index on the table.

F. Create a hash index on the table.

Correct Answer: D
The columnstore index is the standard for storing and querying large data warehousing fact tables. It uses column-based data storage and query processing to achieve up to 10x query performance gains in your data warehouse over traditional row-oriented storage, and up to 10x data compression over the uncompressed data size.
A clustered columnstore index is the physical storage for the entire table.


4.

You have a database named DB1. The database does not have a memory optimized filegroup.
You create a table by running the following Transact-SQL statement:

The table is currently used for OLTP workloads. The analytics user group needs to perform real-time operational analytics that scan most of the records in the table to aggregate on a number of columns.
You need to add the most efficient index to support the analytics workload without changing the OLTP application.
What should you do?

A. Create a clustered index on the table.

B. Create a nonclustered index on the table.

C. Create a nonclustered filtered index on the table.

D. Create a clustered columnstore index on the table.

E. Create a nonclustered columnstore index on the table.

F. Create a hash index on the table.

Correct Answer: E
A nonclustered columnstore index enables real-time operational analytics in which the OLTP workload uses the underlying clustered index, while analytics run concurrently on the columnstore index.
Columnstore indexes can achieve up to 100x better performance on analytics and data warehousing workloads and up to 10x better data compression than traditional rowstore indexes. These recommendations will help your queries achieve the very fast query performance that columnstore indexes are designed to provide.
References:
https://msdn.microsoft.com/en-us/library/gg492088.aspx


 5.

You have a database named DB1. There is no memory-optimized filegroup in the database.
You have a table and a stored procedure that were created by running the following Transact-SQL statements:

The Employee table is persisted on disk. You add 2,000 records to the Employee table.
You need to create an index that meets the following requirements:
✑ Optimizes the performance of the stored procedure.
✑ Covers all the columns required from the Employee table.
✑ Uses FirstName and LastName as included columns.
✑ Minimizes index storage size and index key size.
What should you do?

A. Create a clustered index on the table.

B. Create a nonclustered index on the table.

C. Create a nonclustered filtered index on the table.

D. Create a clustered columnstore index on the table.

E. Create a nonclustered columnstore index on the table.

F. Create a hash index on the table.

Correct Answer: B


 6.

You run the following Transact-SQL statement:

Customer records may be inserted individually or in bulk from an application.
You observe that the application attempts to insert duplicate records.
You must ensure that duplicate records are not inserted and bulk insert operations continue without notifications.
Which Transact-SQL statement should you run?

A. CREATE UNIQUE NONCLUSTERED INDEX IX_Customer_Code ON Customer (Code) WITH (ONLINE = OFF)

B. CREATE UNIQUE INDEX IX_CUSTOMER_Code ON Customer (Code) WITH (IGNORE_DUP_KEY = ON)

C. CREATE UNIQUE INDEX IX_Customer_Code ON Customer (Code) WITH (IGNORE_DUP_KEY = OFF)

D. CREATE UNIQUE NONCLUSTERED INDEX IX_Customer_Code ON Customer (Code)

E. CREATE UNIQUE NONCLUSTERED INDEX IX_Customer_Code ON Customer (Code) WITH (ONLINE = ON)

Correct Answer: B
IGNORE_DUP_KEY = { ON | OFF } specifies the error response when an insert operation attempts to insert duplicate key values into a unique index. The
IGNORE_DUP_KEY option applies only to insert operations after the index is created or rebuilt. The option has no effect when executing CREATE INDEX, ALTER
INDEX, or UPDATE. The default is OFF.
Incorrect Answers:
ONLINE = { ON | OFF } specifies whether underlying tables and associated indexes are available for queries and data modification during the index operation. The default is OFF.
References:
https://docs.microsoft.com/en-us/sql/t-sql/statements/create-index-transact-sql?view=sql-server-2017


7.

You are developing a database reporting solution for a table that contains 900 million rows and is 103 GB.
The table is updated thousands of times a day, but data is not deleted.
The SELECT statements vary in the number of columns used and the amount of rows retrieved.
You need to reduce the amount of time it takes to retrieve data from the table. The solution must prevent data duplication.
Which indexing strategy should you use?

A. a nonclustered index for each column in the table

B. a clustered columnstore index for the table

C. a hash index for the table

D. a clustered index for the table and nonclustered indexes for nonkey columns

Correct Answer: B
Columnstore indexes are the standard for storing and querying large data warehousing fact tables. It uses column-based data storage and query processing to achieve up to 10x query performance gains in your data warehouse over traditional row-oriented storage.
A clustered columnstore index is the physical storage for the entire table.
Generally, you should define the clustered index key with as few columns as possible.
A nonclustered index contains the index key values and row locators that point to the storage location of the table data. You can create multiple nonclustered indexes on a table or indexed view. Generally, nonclustered indexes should be designed to improve the performance of frequently used queries that are not covered by the clustered index.
References:
https://docs.microsoft.com/en-us/sql/relational-databases/indexes/columnstore-indexes-overview?view=sql-server-2017


 


MCSA SQL Server 2016 /70-762/ Similar Questions: Deadlock / ISOLATION LEVEL

2020-01-09 10:10:48 | Programming Study

【Questions about deadlocks & ISOLATION LEVEL】

1.

You have a relational data warehouse that contains 1 TB of data.
You have a stored procedure named usp_sp1 that generated an HTML fragment. The HTML fragment contains color and font style.
You need to return the HTML fragment.
What should you do?

A. Use the NOLOCK option.

B. Execute the DBCC UPDATEUSAGE statement.

C. Use the max worker threads option.

D. Use a table-valued parameter.

E. Set SET ALLOW_SNAPSHOT_ISOLATION to ON.

F. Set SET XACT_ABORT to ON.

G. Execute the ALTER TABLE T1 SET (LOCK_ESCALATION = AUTO); statement.

H. Use the OUTPUT parameters.

Correct Answer: H
A SQL Server stored procedure that you can call is one that returns one or more OUT parameters, which are parameters that the stored procedure uses to return data back to the calling application.


2.

Your company runs end-of-the-month accounting reports. While the reports run, other financial records are updated in the database.
Users report that the reports take longer than expected to run.
You need to reduce the amount of time it takes for the reports to run. The reports must show committed data only.
What should you do?

A. Use the NOLOCK option.

B. Execute the DBCC UPDATEUSAGE statement.

C. Use the max worker threads option.

D. Use a table-valued parameter.

E. Set SET ALLOW_SNAPSHOT_ISOLATION to ON.

F. Set SET XACT_ABORT to ON.

G. Execute the ALTER TABLE T1 SET (LOCK_ESCALATION = AUTO); statement.

H. Use the OUTPUT parameters.

Correct Answer: E
Snapshot isolation enhances concurrency for OLTP applications.
Once snapshot isolation is enabled, updated row versions for each transaction are maintained in tempdb. A unique transaction sequence number identifies each transaction, and these unique numbers are recorded for each row version. The transaction works with the most recent row versions having a sequence number before the sequence number of the transaction. Newer row versions created after the transaction has begun are ignored by the transaction.
References:
https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/snapshot-isolation-in-sql-server


3.

You have several real-time applications that constantly update data in a database. The applications run more than 400 transactions per second that insert and update new metrics from sensors.
A new web dashboard is released to present the data from the sensors. Engineers report that the applications take longer than expected to commit updates.
You need to change the dashboard queries to improve concurrency and to support reading uncommitted data.
What should you do?

A. Use the NOLOCK option.

B. Execute the DBCC UPDATEUSAGE statement.

C. Use the max worker threads option.

D. Use a table-valued parameter.

E. Set SET ALLOW_SNAPSHOT_ISOLATION to ON.

F. Set SET XACT_ABORT to ON.

G. Execute the ALTER TABLE T1 SET (LOCK_ESCALATION = AUTO); statement.

H. Use the OUTPUT parameters.

Correct Answer: A
The NOLOCK hint allows SQL to read data from tables by ignoring any locks and therefore not being blocked by other processes.
This can improve query performance, but also introduces the possibility of dirty reads.
Incorrect Answers:
F: When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back.
G: DISABLE, not AUTO, would be better.
There are two more lock escalation modes: AUTO and DISABLE.
The AUTO mode enables lock escalation for partitioned tables only for the locked partition. For non-partitioned tables it works like TABLE.
The DISABLE mode removes the lock escalation capability for the table and that is important when concurrency issues are more important than memory needs for specific tables.
Note: SQL Server's locking mechanism uses memory resources to maintain locks. In situations where the number of row or page locks increases to a level that decreases the server's memory resources to a minimal level, SQL Server's locking strategy converts these locks to entire table locks, thus freeing memory held by the many single row or page locks to one table lock. This process is called lock escalation, which frees memory, but reduces table concurrency.
References:
https://www.mssqltips.com/sqlservertip/2470/understanding-the-sql-server-nolock-hint/


4.

You have multiple stored procedures inside a transaction.
You need to ensure that all the data modified by the transaction is rolled back if a stored procedure causes a deadlock or times out.
What should you do?

A. Use the NOLOCK option.

B. Execute the DBCC UPDATEUSAGE statement.

C. Use the max worker threads option.

D. Use a table-valued parameter.

E. Set SET ALLOW_SNAPSHOT_ISOLATION to ON.

F. Set SET XACT_ABORT to ON.

G. Execute the ALTER TABLE T1 SET (LOCK_ESCALATION = AUTO); statement.

H. Use the OUTPUT parameters.

Correct Answer: F
When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back.
References:
https://docs.microsoft.com/en-us/sql/t-sql/statements/set-xact-abort-transact-sql?view=sql-server-2017


5.

You have a nonpartitioned table that has a single dimension. The table is named dim.Products.Projections.
The table is queried frequently by several line-of-business applications. The data is updated frequently throughout the day by two processes.
Users report that when they query data from dim.Products.Projections, the responses are slower than expected. The issue occurs when a large number of rows are being updated.
You need to prevent the updates from slowing down the queries.
What should you do?

A. Use the NOLOCK option.

B. Execute the DBCC UPDATEUSAGE statement.

C. Use the max worker threads option.

D. Use a table-valued parameter.

E. Set SET ALLOW_SNAPSHOT_ISOLATION to ON.

Correct Answer: A
The NOLOCK hint allows SQL to read data from tables by ignoring any locks and therefore not being blocked by other processes.
This can improve query performance, but also introduces the possibility of dirty reads.
References:
https://www.mssqltips.com/sqlservertip/2470/understanding-the-sql-server-nolock-hint/

 


6.

...(I've left out the background that was here, since it didn't seem relevant.)

You are analyzing the performance of the database environment. You discover that locks are held for a long period of time as the reports are generated.
You need to generate the reports more quickly. The database must not use additional resources.
What should you do?

A. Update the transaction level of the report query session to READPAST.

B. Modify the report queries to use the UNION statement to combine the results of two or more queries.

C. Set the READ_COMMITTED_SNAPSHOT database option to ON.

D. Update the transaction level of the report query session to READ UNCOMMITTED.

E. Apply a nonclustered index to all tables used in the report queries.

Correct Answer: D
Transactions running at the READ UNCOMMITTED level do not issue shared locks to prevent other transactions from modifying data read by the current transaction. This is the least restrictive of the isolation levels.
References:
https://technet.microsoft.com/en-us/library/ms173763(v=sql.105).aspx


6.

You have a reporting application that uses a table named Table1. You deploy a new batch update process to perform updates to Table1.
The environment is configured with the following properties:
✑ The database is configured with the default isolation setting.
✑ The application and process use the default transaction handling.
You observe the application cannot access any rows that are in use by the process.
You have the following requirements:
✑ Ensure the application is not blocked by the process.
✑ Ensure the application has a consistent view of the data
✑ Ensure the application does not read dirty data.
You need to resolve the issue and meet the requirements with the least amount of administrative effort.
What should you do?

A. Enable the database for the ALLOW_SNAPSHOT_ISOLATION isolation level. Modify the application for the SERIALIZABLE isolation level.

B. Enable the database for the READ_COMMITTED_SNAPSHOT isolation level.

C. Enable the application for the WITH (NOLOCK) hint.

D. Enable the database for the ALLOW_SNAPSHOT_ISOLATION isolation level. Modify the application and the update process for the SNAPSHOT isolation level.

Correct Answer: B
Snapshot isolation must be enabled by setting the ALLOW_SNAPSHOT_ISOLATION ON database option before it is used in transactions. This activates the mechanism for storing row versions in the temporary database (tempdb).
READ COMMITTED is the default isolation level for SQL Server. It prevents dirty reads by specifying that statements cannot read data values that have been modified but not yet committed by other transactions. Other transactions can still modify, insert, or delete data between executions of individual statements within the current transaction, resulting in non-repeatable reads, or "phantom" data.


Incorrect Answers:
A: SERIALIZABLE is the most restrictive isolation level, because it locks entire ranges of keys and holds the locks until the transaction is complete. It encompasses REPEATABLE READ and adds the restriction that other transactions cannot insert new rows into ranges that have been read by the transaction until the transaction is complete.
References:
https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/snapshot-isolation-in-sql-server


7.

You are creating the following two stored procedures:
✑ A natively-compiled stored procedure
✑ An interpreted stored procedure that accesses both disk-based and memory-optimized tables
Both stored procedures run within transactions.
You need to ensure that cross-container transactions are possible.
Which setting or option should you use?

A. the SET TRANSACTION_READ_COMMITTED isolation level for the connection

B. the SERIALIZABLE table hint on disk-based tables

C. the SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT=ON option for the database

D. the SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT=OFF option for the database

Correct Answer: C
Provide a supported isolation level for the memory-optimized table using a table hint, such as WITH (SNAPSHOT). The need for the WITH (SNAPSHOT) hint can be avoided through the use of the database option MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT. When this option is set to ON, access to a memory-optimized table under a lower isolation level is automatically elevated to SNAPSHOT isolation.
Incorrect Answers:
B: Accessing memory optimized tables using the READ COMMITTED isolation level is supported only for autocommit transactions. It is not supported for explicit or implicit transactions.
References:
https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/transactions-with-memory-optimized-tables?view=sql-server-2017
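
For reference, a minimal sketch of the two approaches the answer describes; DB1 and dbo.SensorData (a memory-optimized table) are placeholders.

-- Database option: lower isolation levels on memory-optimized tables are elevated to SNAPSHOT.
ALTER DATABASE DB1 SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;

-- Per-query alternative: an explicit table hint on the memory-optimized table.
SELECT COUNT(*) FROM dbo.SensorData WITH (SNAPSHOT);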


8.

You are designing a stored procedure for a database named DB1.
The following requirements must be met during the entire execution of the stored procedure:
The stored procedure must only read changes that are persisted to the database.
SELECT statements within the stored procedure should only show changes to the data that are made by
the stored procedure.
You need to configure the transaction isolation level for the stored procedure.
Which Transact-SQL statement or statements should you run?


A. SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
ALTER DATABASE DB1 SET READ_COMMITTED_SNAPSHOT ON


B. SET TRANSACTION ISOLATION LEVEL READ COMMITTED
ALTER DATABASE DB1 SET READ_COMMITTED_SNAPSHOT OFF

C. SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

D. SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

ALTER DATABASE SET READ_COMMITTED_SNAPSHOT OFF

Correct Answer: B
READ COMMITTED specifies that statements cannot read data that has been modified but not committed by other transactions. This prevents dirty reads. Data can be changed by other transactions between individual statements within the current transaction, resulting in nonrepeatable reads or phantom data. This option is the SQL Server default.
Incorrect Answers:
A, D: READ UNCOMMITTED specifies that statements can read rows that have been modified by other transactions but not yet committed.
References:
https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/snapshot-isolation-in-sql-server


9.

You manage a database that supports an Internet of Things (IoT) solution. The database records metrics from over 100 million devices every minute. The database requires 99.995% uptime.
The database uses a table named Checkins that is 100 gigabytes (GB) in size. The Checkins table stores metrics from the devices. The database also has a table named Archive that stores four terabytes (TB) of data. You use stored procedures for all access to the tables.
You observe that the wait type PAGELATCH_IO causes large amounts of blocking.
You need to resolve the blocking issues while minimizing downtime for the database.
Which two actions should you perform? Each correct answer presents part of the solution.

A. Convert all stored procedures that access the Checkins table to natively compiled procedures.

B. Convert the Checkins table to an In-Memory OLTP table.

C. Convert all tables to clustered columnstore indexes.

D. Convert the Checkins table to a clustered columnstore index.

Correct Answer: AB
Natively compiled stored procedures are Transact-SQL stored procedures compiled to native code that access memory-optimized tables. Natively compiled stored procedures allow for efficient execution of the queries and business logic in the stored procedure.
SQL Server In-Memory OLTP helps improve performance of OLTP applications through efficient, memory-optimized data access, native compilation of business logic, and lock- and latch free algorithms. The In-Memory OLTP feature includes memory-optimized tables and table types, as well as native compilation of
Transact-SQL stored procedures for efficient access to these tables.
References:
https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/natively-compiled-stored-procedures https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/memory-optimized-tables


10.

You have a table that stores transactions partitioned by year. Users occasionally experience performance issues when they access the table. The table is locked exclusively when the records are updated. You need to prevent exclusive locks on the table and maintain data integrity.
What should you do?

A. Set the LOCK_ESCALATION property to DISABLE

B. Set the DATA_COMPRESSION property to ROW at the partition level

C. Set the LOCK_ESCALATION property to AUTO

D. Set the LOCK_ESCALATION property to TABLE

E. Set the DATA_COMPRESSION property to PAGE at the partition level

Correct Answer: C
The default lock escalation mode is called TABLE, it implements SQL Server's lock escalation on all types of tables whether partitioned or not partitioned.
There are two more lock escalation modes: AUTO and DISABLE.
✑ The AUTO mode enables lock escalation for partitioned tables only for the locked partition. For non-partitioned tables it works like TABLE.
✑ The DISABLE mode removes the lock escalation capability for the table and that is important when concurrency issues are more important than memory needs for specific tables.
References:
https://www.mssqltips.com/sqlservertip/4359/altering-lock-escalation-for-sql-server-tables/
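
For reference, a minimal sketch of the AUTO setting; the table name is a placeholder for the question's partitioned transactions table.

-- Escalate locks per locked partition instead of to the whole table.
ALTER TABLE dbo.Transactions SET (LOCK_ESCALATION = AUTO);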
 


 


MCSA SQL Server 2016 /70-762/ Similar Questions: TRIGGER & Procedure

2020-01-09 10:10:48 | Programming Study

【TRIGGER and Procedure】

1.

You are developing an application to track customer sales.
You need to create a database object that meets the following requirements:
- Return a value of 0 if data inserted successfully into the Customers table.
- Return a value of 1 if data is not inserted successfully into the Customers table.
- Support logic that is written by using managed code.
What should you create?

A. extended procedure

B. CLR procedure

C. user-defined procedure

D. DML trigger

E. DDL trigger

F. scalar-valued function

G. table-valued function

Correct Answer: D C

A trigger cannot RETURN a value!
DML triggers are a special type of stored procedure that automatically takes effect when a data manipulation language (DML) event takes place that affects the table or view defined in the trigger. DML events include INSERT, UPDATE, or DELETE statements. DML triggers can be used to enforce business rules and data integrity, query other tables, and include complex Transact-SQL statements.
A CLR trigger can be either an AFTER or INSTEAD OF trigger. A CLR trigger can also be a DDL trigger. Instead of executing a Transact-SQL stored procedure, a CLR trigger executes one or more methods written in managed code that are members of an assembly created in the .NET Framework and uploaded in SQL Server.
References:
https://msdn.microsoft.com/en-us/library/ms178110.aspx


2.

You are developing an application to track customer sales.
You need to create a database object that meets the following requirements:
- Return a value of 0 if data inserted successfully into the Customers table.
- Return a value of 1 if data is not inserted successfully into the Customers table.
- Support TRY...CATCH error handling
- Be written by using Transact-SQL statements.
What should you create?

A. extended procedure

B. CLR procedure

C. user-defined procedure

D. DML trigger

E. scalar-valued function

F. table-valued function

Correct Answer: C

A trigger cannot RETURN a value!


3.

You are developing an application to track customer sales.
You need to create a database object that meets the following requirements:
✑ Launch when table data is modified.
✑ Evaluate the state of a table before and after a data modification and take actions based on the difference.
✑ Prevent malicious or incorrect table data operations.
✑ Prevent changes that violate referential integrity by cancelling the attempted data modification.
✑ Run managed code packaged in an assembly that is created in the Microsoft.NET Framework and located into Microsoft SQL Server.
What should you create?

A. extended procedure

B. CLR procedure

C. user-defined procedure

D. DML trigger

E. scalar-valued function

F. table-valued function

G. DDL trigger

Correct Answer: B
You can create a database object inside SQL Server that is programmed in an assembly created in the Microsoft .NET Framework common language runtime
(CLR). Database objects that can leverage the rich programming model provided by the CLR include DML triggers, DDL triggers, stored procedures, functions, aggregate functions, and types.
Creating a CLR trigger (DML or DDL) in SQL Server involves the following steps:
Define the trigger as a class in a .NET Framework-supported language. For more information about how to program triggers in the CLR, see CLR Triggers. Then, compile the class to build an assembly in the .NET Framework using the appropriate language compiler.
Register the assembly in SQL Server using the CREATE ASSEMBLY statement. For more information about assemblies in SQL Server, see Assemblies
(Database Engine).
Create the trigger that references the registered assembly.
References:
https://msdn.microsoft.com/en-us/library/ms179562.aspx


4.

You are developing an application to track customer sales.
You need to return the sum of orders that have been finalized, given a specified order identifier. This value will be used in other Transact-SQL statements.
You need to create a database object.
What should you create?

A. extended procedure

B. CLR procedure

C. user-defined procedure

D. DML trigger

E. scalar-valued function

F. table-valued function

Correct Answer: E
User-defined scalar functions return a single data value of the type defined in the RETURNS clause.
References:
https://technet.microsoft.com/en-us/library/ms177499(v=sql.105).aspx


5.

You are developing an application to track customer sales.
You need to create an object that meets the following requirements:
- Run managed code packaged in an assembly that was created in the Microsoft.NET Framework and uploaded in Microsoft SQL Server.
- Run within a transaction and roll back if a failure occurs.
- Run when a table is created or modified.
What should you create?

A. extended procedure

B. CLR procedure

C. user-defined procedure

D. DML trigger

E. scalar-valued function

F. table-valued function

G. DDL trigger

Correct Answer: B
The common language runtime (CLR) is the heart of the Microsoft .NET Framework and provides the execution environment for all .NET Framework code. Code that runs within the CLR is referred to as managed code.
With the CLR hosted in Microsoft SQL Server (called CLR integration), you can author stored procedures, triggers, user-defined functions, user-defined types, and user-defined aggregates in managed code. Because managed code compiles to native code prior to execution, you can achieve significant performance increases in some scenarios.


6.

You are developing an application to track customer sales. You create tables to support the application. You need to create a database object that meets the following data entry requirements:

What should you create?

A. extended procedure

B. CLR procedure

C. user-defined procedure

D. DML trigger

E. DDL trigger

F. scalar-valued function

G. table-valued function

Correct Answer: C
References:
https://msdn.microsoft.com/en-us/library/ms345075.aspx


 7.

...(I've trimmed the parts before and after that didn't seem relevant.)

You must modify the Orders table to meet the following requirements:
✑ Create new rows in the table without granting INSERT permissions to the table.
✑ Notify the sales person who places an order whether or not the order was completed.

You need to modify the environment to meet the requirements for the Orders table.
What should you create?

A. an AFTER UPDATE trigger

B. a user-defined function

C. a stored procedure with output parameters

D. an INSTEAD OF INSERT trigger

Correct Answer: D
From Question: You must modify the Orders table to meet the following requirements:
✑ Create new rows in the table without granting INSERT permissions to the table.
✑ Notify the sales person who places an order whether or not the order was completed.
References:
https://docs.microsoft.com/en-us/sql/t-sql/statements/create-trigger-transact-sql?view=sql-server-2017


8.

You have a view that includes an aggregate.
You must be able to change the values of columns in the view. The changes must be reflected in the tables that the view uses.
You need to ensure that you can update the view.
What should you create?

A. a DML trigger

B. a schema-bound view

C. a stored procedure

D. a DDL trigger

Correct Answer: A B

If the view uses a function that contains an aggregate ("Aggregate in Function"), i.e. the view is not an updatable view, then the only way to change it is with a DML trigger (an INSTEAD OF trigger). Check what counts as an updatable view here:

https://docs.microsoft.com/en-us/sql/t-sql/statements/create-view-transact-sql?view=sql-server-2017

So, for the line "You have a view that includes an aggregate": if the aggregate sits inside a UDF used by the view, then "A. a DML trigger" is correct; if not, the view is an UPDATABLE VIEW, so "B. a schema-bound view" is correct.

When you create a view, you must obey the following conditions in order to make the view updateable:
✑ The columns being modified in the view have to directly reference the underlying table columns. As a consequence, the view's columns being modified cannot be the result of an aggregate function or a computed column.
The only way to make data changes on a non-updateable view is by using INSTEAD OF DML triggers. This way you can use procedural code to overcome the limitation.
References:
https://www.mssqltips.com/sqlservertip/5984/sql-server-trigger-on-view-example/