ADO.NET SQL Server
This section describes features and behaviors that are specific to the .NET Framework Data Provider for SQL Server (System.Data.SqlClient). System.Data.SqlClient provides access to SQL Server and encapsulates the database-specific protocols. The functionality of the data provider is designed to be similar to that of the .NET Framework data providers for OLE DB, ODBC, and Oracle. System.Data.SqlClient includes a tabular data stream (TDS) parser to communicate directly with SQL Server.

Note: To use the .NET Framework Data Provider for SQL Server, an application must reference the System.Data.SqlClient namespace.

In This Section
SQL Server Security
SQL Server Data Types and ADO.NET
SQL Server Binary and Large-Value Data
SQL Server Data Operations in ADO.NET
SQL Server Features and ADO.NET
LINQ to SQL

For complete documentation of the SQL Server Database Engine, see SQL Server Books Online for the version of SQL Server you are using.

See also
SQL Server Security

SQL Server has many features that support creating secure database applications. Common security considerations, such as data theft or vandalism, apply regardless of the version of SQL Server you are using. Data integrity should also be considered a security issue: if data is not protected, it can become worthless if ad hoc manipulation is permitted and the data is inadvertently or maliciously modified with incorrect values or deleted entirely. In addition, there are often legal requirements that must be adhered to, such as the correct storage of confidential information. Storing some kinds of personal data is proscribed entirely, depending on the laws that apply in a particular jurisdiction.

Each version of SQL Server has different security features, as does each version of Windows, with later versions having enhanced functionality over earlier ones. It is important to understand that security features alone cannot guarantee a secure database application. Each database application is unique in its requirements, execution environment, deployment model, physical location, and user population. Some applications that are local in scope may need only minimal security, whereas other local applications, or applications deployed over the Internet, may require stringent security measures and ongoing monitoring and evaluation.

The security requirements of a SQL Server database application should be considered at design time, not as an afterthought. Evaluating threats early in the development cycle gives you the opportunity to mitigate potential damage wherever a vulnerability is detected. Even if the initial design of an application is sound, new threats may emerge as the system evolves. By creating multiple lines of defense around your database, you can minimize the damage inflicted by a security breach.
Your first line of defense is to reduce the attack surface area by never granting more permissions than are absolutely necessary. The topics in this section briefly describe the security features in SQL Server that are relevant for developers, with links to relevant topics in SQL Server Books Online and other resources that provide more detailed coverage.

In This Section
Overview of SQL Server Security
Application Security Scenarios in SQL Server
SQL Server Express Security

Related Sections
Security Center for SQL Server Database Engine and Azure SQL Database
Security Considerations for a SQL Server Installation

See also

Overview of SQL Server Security

A defense-in-depth strategy, with overlapping layers of security, is the best way to counter security threats. SQL Server provides a security architecture that is designed to allow database administrators and developers to create secure database applications and counter threats. Each version of SQL Server has improved on previous versions with the introduction of new features and functionality. However, security does not ship in the box. Each application is unique in its security requirements. Developers need to understand which combination of features and functionality is most appropriate to counter known threats, and to anticipate threats that may arise in the future.

A SQL Server instance contains a hierarchical collection of entities, starting with the server. Each server contains multiple databases, and each database contains a collection of securable objects. Every SQL Server securable has associated permissions that can be granted to a principal, which is an individual, group, or process granted access to SQL Server. The SQL Server security framework manages access to securable entities through authentication and authorization.
The topics in this section cover SQL Server security fundamentals, providing links to the complete documentation in the relevant version of SQL Server Books Online.

In This Section
Authentication in SQL Server
Server and Database Roles in SQL Server
Ownership and User-Schema Separation in SQL Server
Authorization and Permissions in SQL Server
Data Encryption in SQL Server
CLR Integration Security in SQL Server

See also
Authentication in SQL Server

SQL Server supports two authentication modes: Windows authentication mode and mixed mode.
Important: We recommend using Windows authentication wherever possible. Windows authentication uses a series of encrypted messages to authenticate users in SQL Server. When SQL Server logins are used, SQL Server login names and encrypted passwords are passed across the network, which makes them less secure. With Windows authentication, users are already logged on to Windows and do not have to log on separately to SQL Server. The following SqlConnection.ConnectionString specifies Windows authentication without requiring users to provide a user name or password.

"Server=MSSQL1;Database=AdventureWorks;Integrated Security=true;"

Note: Logins are distinct from database users. You must map logins or Windows groups to database users or roles in a separate operation. You then grant permissions to users or roles to access database objects.

Authentication Scenarios

Windows authentication is usually the best choice in the following situations:
SQL Server logins are often used in the following situations:
Note: Specifying Windows authentication does not disable SQL Server logins. Use the ALTER LOGIN DISABLE Transact-SQL statement to disable highly privileged SQL Server logins.

Login Types

SQL Server supports three types of logins:
Note: SQL Server provides logins created from certificates or asymmetric keys that are used only for code signing. They cannot be used to connect to SQL Server.

Mixed Mode Authentication

If you must use mixed mode authentication, you must create SQL Server logins, which are stored in SQL Server. You then have to supply the SQL Server user name and password at run time.

Important: SQL Server installs with a SQL Server login named sa (an abbreviation of "system administrator"). Assign a strong password to the sa login and do not use the sa login in your application. The sa login maps to the sysadmin fixed server role, which has irrevocable administrative privileges on the whole server. There are no limits to the potential damage if an attacker gains access as a system administrator. All members of the Windows BUILTIN\Administrators group (the local administrators group) are members of the sysadmin role by default, but can be removed from that role.

SQL Server provides Windows password policy mechanisms for SQL Server logins when it is running on Windows Server 2003 or later versions. Password complexity policies are designed to deter brute force attacks by increasing the number of possible passwords. SQL Server can apply the same complexity and expiration policies used in Windows Server 2003 to passwords used inside SQL Server.

Important: Concatenating connection strings from user input can leave you vulnerable to a connection string injection attack. Use the SqlConnectionStringBuilder to create syntactically valid connection strings at run time. For more information, see Connection String Builders.

External Resources

For more information, see the following resources.
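As a minimal sketch of this approach (the server name echoes the connection string example above; the method and parameter names are hypothetical), a connection string can be assembled with SqlConnectionStringBuilder instead of string concatenation:

```csharp
using System.Data.SqlClient;

class ConnectionStringExample
{
    static string BuildConnectionString(string userSuppliedDatabase)
    {
        // SqlConnectionStringBuilder escapes and validates keyword/value
        // pairs, so user input cannot inject extra connection keywords.
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "MSSQL1",
            InitialCatalog = userSuppliedDatabase,
            IntegratedSecurity = true  // Windows authentication; no password on the wire
        };
        return builder.ConnectionString;
    }
}
```

If the user-supplied value contains extra keyword pairs, they are treated as part of the catalog name rather than being parsed as connection keywords.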
See also
Server and Database Roles in SQL Server

All versions of SQL Server use role-based security, which allows you to assign permissions to a role, or group of users, instead of to individual users. Fixed server and fixed database roles have a fixed set of permissions assigned to them.

Fixed Server Roles

Fixed server roles have a fixed set of permissions and server-wide scope. They are intended for use in administering SQL Server, and the permissions assigned to them cannot be changed. Logins can be assigned to fixed server roles without having a user account in a database.

Important: The sysadmin fixed server role encompasses all other roles and has unlimited scope. Do not add principals to this role unless they are highly trusted. sysadmin role members have irrevocable administrative privileges on all server databases and resources.

Be selective when you add users to fixed server roles. For example, the bulkadmin role allows users to insert the contents of any local file into a table, which could jeopardize data integrity. See SQL Server Books Online for the complete list of fixed server roles and permissions.

Fixed Database Roles

Fixed database roles have a pre-defined set of permissions that are designed to allow you to easily manage groups of permissions. Members of the db_owner role can perform all configuration and maintenance activities on the database. For more information about SQL Server predefined roles, see the following resources.
Database Roles and Users

Logins must be mapped to database user accounts in order to work with database objects. Database users can then be added to database roles, inheriting any permission sets associated with those roles. All permissions can be granted. You must also consider the public role, the dbo user account, and the guest account when you design security for your application.

The public Role

The public role is contained in every database, including the system databases. It cannot be dropped, and you cannot add or remove users from it. Permissions granted to the public role are inherited by all other users and roles, because they belong to the public role by default. Grant public only the permissions you want all users to have.

The dbo User Account

The dbo, or database owner, is a user account that has implied permissions to perform all activities in the database. Members of the sysadmin fixed server role are automatically mapped to dbo.

Note: dbo is also the name of a schema, as discussed in Ownership and User-Schema Separation in SQL Server.

The dbo user account is frequently confused with the db_owner fixed database role. The scope of db_owner is a database; the scope of sysadmin is the whole server. Membership in the db_owner role does not confer dbo user privileges.

The guest User Account

After a user has been authenticated and allowed to log in to an instance of SQL Server, a separate user account must exist in each database the user has to access. Requiring a user account in each database prevents users from connecting to an instance of SQL Server and accessing all the databases on a server. The existence of a guest user account in the database circumvents this requirement by allowing a login without a database user account to access a database. The guest account is a built-in account in all versions of SQL Server. By default, it is disabled in new databases.
If it is enabled, you can disable it by revoking its CONNECT permission with the Transact-SQL REVOKE CONNECT FROM GUEST statement.

Important: Avoid using the guest account; all logins without their own database permissions obtain the database permissions granted to this account. If you must use the guest account, grant it minimum permissions.

For more information about SQL Server logins, users, and roles, see the following resources.
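The login-to-user-to-role mapping described above can be sketched in Transact-SQL (the login, user, and role names are hypothetical; the ALTER ROLE syntax shown is from newer versions of SQL Server, while older versions use sp_addrolemember):

```sql
-- Create a login and map it to a database user.
CREATE LOGIN AppLogin WITH PASSWORD = 'UseAStr0ng!Passphrase';
GO
USE AdventureWorks;
CREATE USER AppUser FOR LOGIN AppLogin;

-- Add the user to a fixed database role to inherit its permission set.
ALTER ROLE db_datareader ADD MEMBER AppUser;

-- Disable the guest account by revoking its CONNECT permission.
REVOKE CONNECT FROM GUEST;
```

Until the login is mapped to a user, it can connect to the instance but cannot access objects in the database.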
See also
Ownership and User-Schema Separation in SQL Server

A core concept of SQL Server security is that owners of objects have irrevocable permissions to administer them. You cannot remove privileges from an object owner, and you cannot drop users from a database if they own objects in it.

User-Schema Separation

User-schema separation allows for more flexibility in managing database object permissions. A schema is a named container for database objects, which allows you to group objects into separate namespaces. For example, the AdventureWorks sample database contains schemas for Production, Sales, and HumanResources. The four-part naming syntax for referring to objects specifies the schema name:

Server.Database.DatabaseSchema.DatabaseObject

Schema Owners and Permissions

Schemas can be owned by any database principal, and a single principal can own multiple schemas. You can apply security rules to a schema, which are inherited by all objects in the schema. Once you set up access permissions for a schema, those permissions are automatically applied as new objects are added to the schema. Users can be assigned a default schema, and multiple database users can share the same schema. By default, when developers create objects in a schema, the objects are owned by the security principal that owns the schema, not the developer. Object ownership can be transferred with the ALTER AUTHORIZATION Transact-SQL statement. A schema can also contain objects that are owned by different users and have more granular permissions than those assigned to the schema, although this is not recommended because it adds complexity to managing permissions. Objects can be moved between schemas, and schema ownership can be transferred between principals. Database users can be dropped without affecting schemas.

Built-In Schemas

SQL Server ships with ten pre-defined schemas that have the same names as the built-in database users and roles. These exist mainly for backward compatibility.
You can drop the schemas that have the same names as the fixed database roles if you do not need them. You cannot drop the following schemas:

dbo
guest
sys
INFORMATION_SCHEMA

If you drop the fixed-database-role schemas from the model database, they will not appear in new databases.

Note: The sys and INFORMATION_SCHEMA schemas are reserved for system objects. You cannot create objects in these schemas, and you cannot drop them.

The dbo Schema

The dbo schema is the default schema for a newly created database. The dbo schema is owned by the dbo user account. By default, users created with the CREATE USER Transact-SQL command have dbo as their default schema. Users who are assigned the dbo schema do not inherit the permissions of the dbo user account. No permissions are inherited from a schema by users; schema permissions are inherited by the database objects contained in the schema.

Note: When database objects are referenced by using a one-part name, SQL Server first looks in the user's default schema. If the object is not found there, SQL Server looks next in the dbo schema. If the object is not in the dbo schema, an error is returned.

External Resources

For more information on object ownership and schemas, see the following resources.
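As an illustrative sketch (the schema, table, role, and principal names are hypothetical), creating a schema, granting schema-level permissions, and transferring ownership look like this:

```sql
-- Create a schema owned by a specific database principal.
CREATE SCHEMA Reporting AUTHORIZATION ReportAdmin;
GO

-- Objects created in the schema are owned by the schema owner by default.
CREATE TABLE Reporting.MonthlySales (SalesMonth date, Total money);

-- Schema-level permissions are inherited by all objects in the schema,
-- including objects added later.
GRANT SELECT ON SCHEMA::Reporting TO ReportReaders;

-- Ownership of the schema can be transferred between principals.
ALTER AUTHORIZATION ON SCHEMA::Reporting TO NewOwner;
```

Because the grant is at the schema level, no additional GRANT statements are needed as new reporting tables are added.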
See also
Authorization and Permissions in SQL Server

When you create database objects, you must explicitly grant permissions to make them accessible to users. Every securable object has permissions that can be granted to a principal using permission statements.

The Principle of Least Privilege

Developing an application using a least-privileged user account (LUA) approach is an important part of a defense-in-depth strategy for countering security threats. The LUA approach ensures that users follow the principle of least privilege and always log on with limited user accounts. Administrative tasks are broken out using fixed server roles, and the use of the sysadmin fixed server role is severely restricted.

Always follow the principle of least privilege when granting permissions to database users. Grant the minimum permissions necessary to a user or role to accomplish a given task.

Important: Developing and testing an application using the LUA approach adds a degree of difficulty to the development process. It is easier to create objects and write code while logged on as a system administrator or database owner than it is using a LUA account. However, developing applications using a highly privileged account can obscure the impact of reduced functionality when least-privileged users attempt to run an application that requires elevated permissions in order to function correctly. Granting excessive permissions to users in order to reacquire lost functionality can leave your application vulnerable to attack. Designing, developing, and testing your application while logged on with a LUA account enforces a disciplined approach to security planning that eliminates unpleasant surprises and the temptation to grant elevated privileges as a quick fix. You can use a SQL Server login for testing even if your application is intended to deploy using Windows authentication.

Role-Based Permissions

Granting permissions to roles rather than to users simplifies security administration.
Permission sets that are assigned to roles are inherited by all members of the role. It is easier to add or remove users from a role than it is to recreate separate permission sets for individual users. Roles can be nested; however, too many levels of nesting can degrade performance. You can also add users to fixed database roles to simplify assigning permissions. You can grant permissions at the schema level. Users automatically inherit permissions on all new objects created in the schema; you do not need to grant permissions as new objects are created.

Permissions Through Procedural Code

Encapsulating data access through modules such as stored procedures and user-defined functions provides an additional layer of protection around your application. You can prevent users from directly interacting with database objects by granting permissions only to stored procedures or functions while denying permissions to underlying objects such as tables. SQL Server achieves this by ownership chaining.

Permission Statements

The three Transact-SQL permission statements are described in the following list.

GRANT: Grants a permission to a principal.
DENY: Denies a permission to a principal; takes precedence over permissions granted directly or inherited through role membership.
REVOKE: Removes a previously granted or denied permission.
Note: Members of the sysadmin fixed server role and object owners cannot be denied permissions.

Ownership Chains

SQL Server ensures that only principals that have been granted permission can access objects. When multiple database objects access each other sequentially, the sequence is known as a chain. When SQL Server traverses the links in a chain, it evaluates permissions differently than it would if it were accessing each item separately. When an object is accessed through a chain, SQL Server first compares the object's owner to the owner of the calling object (the previous link in the chain). If both objects have the same owner, permissions on the referenced object are not checked. Whenever an object accesses another object that has a different owner, the ownership chain is broken, and SQL Server must check the caller's security context.

Procedural Code and Ownership Chaining

Suppose that a user is granted execute permissions on a stored procedure that selects data from a table. If the stored procedure and the table have the same owner, the user doesn't need to be granted any permissions on the table and can even be denied permissions. However, if the stored procedure and the table have different owners, SQL Server must check the user's permissions on the table before allowing access to the data.

Note: Ownership chaining does not apply in the case of dynamic SQL statements. To call a procedure that executes a dynamic SQL statement, the caller must be granted permissions on the underlying tables, leaving your application vulnerable to SQL injection attack. SQL Server provides other mechanisms, such as impersonation and signing modules with certificates, that do not require granting permissions on the underlying tables. These can also be used with CLR stored procedures.

External Resources

For more information, see the following resources.
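As a small illustrative sketch (the table, role, and user names are hypothetical), the three permission statements interact as follows:

```sql
-- Grant SELECT on a table to a role; all role members inherit it.
GRANT SELECT ON dbo.Customers TO SalesRole;

-- Deny SELECT to one user; DENY overrides the permission the user
-- would otherwise inherit through SalesRole membership.
DENY SELECT ON dbo.Customers TO TempUser;

-- Remove the DENY entry; the user again inherits SELECT from the role.
REVOKE SELECT ON dbo.Customers FROM TempUser;
```

REVOKE removes an explicit GRANT or DENY entry; it does not itself block access that the principal still inherits from a role.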
See also
Data Encryption in SQL Server

SQL Server provides functions to encrypt and decrypt data using a certificate, asymmetric key, or symmetric key. It manages all of these in an internal certificate store. The store uses an encryption hierarchy that secures certificates and keys at one level with the layer above it in the hierarchy. This feature area of SQL Server is called Secret Storage.

The fastest mode of encryption supported by the encryption functions is symmetric key encryption. This mode is suitable for handling large volumes of data. The symmetric keys can be encrypted by certificates, passwords, or other symmetric keys.

Keys and Algorithms

SQL Server supports several symmetric key encryption algorithms, including DES, Triple DES, RC2, RC4, 128-bit RC4, DESX, 128-bit AES, 192-bit AES, and 256-bit AES. The algorithms are implemented using the Windows Crypto API.

Within the scope of a database connection, SQL Server can maintain multiple open symmetric keys. An open key is retrieved from the store and is available for decrypting data. When a piece of data is decrypted, there is no need to specify the symmetric key to use. Each encrypted value contains the key identifier (key GUID) of the key used to encrypt it. The engine matches the encrypted byte stream to an open symmetric key, if the correct key has been decrypted and is open. This key is then used to perform the decryption and return the data. If the correct key is not open, NULL is returned. For an example that shows how to work with encrypted data in a database, see Encrypt a Column of Data.

External Resources

For more information on data encryption, see the following resources.
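As an illustrative sketch (the certificate, key, table, and column names are hypothetical), column encryption with a symmetric key protected by a certificate looks like this:

```sql
-- Create a certificate and a symmetric key protected by it.
CREATE CERTIFICATE CardCert WITH SUBJECT = 'Protects the card key';
CREATE SYMMETRIC KEY CardKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE CardCert;

-- Encrypt: the key must be open; each encrypted value records the key GUID.
OPEN SYMMETRIC KEY CardKey DECRYPTION BY CERTIFICATE CardCert;
UPDATE dbo.Cards
SET CardNumberEnc = EncryptByKey(Key_GUID('CardKey'), CardNumber);

-- Decrypt: no key name is needed, because the engine matches the value's
-- key GUID against the open keys; NULL is returned if the key is not open.
SELECT CONVERT(nvarchar(25), DecryptByKey(CardNumberEnc)) AS CardNumber
FROM dbo.Cards;
CLOSE SYMMETRIC KEY CardKey;
```

Note that DecryptByKey takes no key name; this is the behavior described above, where the key GUID embedded in each encrypted value selects the open key.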
See also
CLR Integration Security in SQL Server

Microsoft SQL Server provides integration with the common language runtime (CLR) component of the .NET Framework. CLR integration allows you to write stored procedures, triggers, user-defined types, user-defined functions, user-defined aggregates, and streaming table-valued functions using any .NET Framework language, such as Microsoft Visual Basic .NET or Microsoft Visual C#. The CLR supports a security model called code access security (CAS) for managed code. In this model, permissions are granted to assemblies based on evidence supplied by the code in metadata. SQL Server integrates its user-based security model with the code access-based security model of the CLR.

External Resources

For more information on CLR integration with SQL Server, see the following resources.
See also
Application Security Scenarios in SQL Server

There is no single correct way to create a secure SQL Server client application. Every application is unique in its requirements, deployment environment, and user population. An application that is reasonably secure when it is initially deployed can become less secure over time. It is impossible to predict with any accuracy what threats may emerge in the future. SQL Server, as a product, has evolved over many versions to incorporate the latest security features that enable developers to create secure database applications. However, security doesn't come in the box; it requires continual monitoring and updating.

Common Threats

Developers need to understand security threats, the tools provided to counter them, and how to avoid self-inflicted security holes. Security can best be thought of as a chain, where a break in any one link compromises the strength of the whole. The following list includes some common security threats that are discussed in more detail in the topics in this section.

SQL Injection

SQL injection is the process by which a malicious user enters Transact-SQL statements instead of valid input. If the input is passed directly to the server without being validated, and if the application inadvertently executes the injected code, the attack has the potential to damage or destroy data. You can thwart SQL injection attacks by using stored procedures and parameterized commands, avoiding dynamic SQL, and restricting the permissions of all users.

Elevation of Privilege

Elevation of privilege attacks occur when a user is able to assume the privileges of a trusted account, such as an owner or administrator. Always run under least-privileged user accounts and assign only needed permissions. Avoid using administrative or owner accounts for executing code. This limits the amount of damage that can occur if an attack succeeds.
When performing tasks that require additional permissions, use procedure signing or impersonation only for the duration of the task. You can sign stored procedures with certificates or use impersonation to temporarily assign permissions.

Probing and Intelligent Observation

A probing attack can use error messages generated by an application to search for security vulnerabilities. Implement error handling in all procedural code to prevent SQL Server error information from being returned to the end user.

Authentication

A connection string injection attack can occur when using SQL Server logins if a connection string based on user input is constructed at run time. If the connection string is not checked for valid keyword pairs, an attacker can insert extra characters, potentially accessing sensitive data or other resources on the server. Use Windows authentication wherever possible. If you must use SQL Server logins, use the SqlConnectionStringBuilder to create and validate connection strings at run time.

Passwords

Many attacks succeed because an intruder was able to obtain or guess a password for a privileged user. Passwords are your first line of defense against intruders, so setting strong passwords is essential to the security of your system. Create and enforce password policies for mixed mode authentication. Always assign a strong password to the sa account, even when using Windows authentication.

In This Section
Managing Permissions with Stored Procedures in SQL Server
Writing Secure Dynamic SQL in SQL Server
Signing Stored Procedures in SQL Server
Customizing Permissions with Impersonation in SQL Server
Granting Row-Level Permissions in SQL Server
Creating Application Roles in SQL Server
Enabling Cross-Database Access in SQL Server

See also
Managing Permissions with Stored Procedures in SQL Server

One method of creating multiple lines of defense around your database is to implement all data access using stored procedures or user-defined functions. You revoke or deny all permissions to underlying objects, such as tables, and grant EXECUTE permissions on stored procedures. This effectively creates a security perimeter around your data and database objects.

Stored Procedure Benefits

Stored procedures have the following benefits:
Stored Procedure Execution

Stored procedures take advantage of ownership chaining to provide access to data so that users do not need to have explicit permission to access database objects. An ownership chain exists when objects that access each other sequentially are owned by the same user. For example, a stored procedure can call other stored procedures, or a stored procedure can access multiple tables. If all objects in the chain of execution have the same owner, then SQL Server only checks the EXECUTE permission for the caller, not the caller's permissions on other objects. Therefore, you need to grant only EXECUTE permissions on stored procedures; you can revoke or deny all permissions on the underlying tables.

Best Practices

Simply writing stored procedures isn't enough to adequately secure your application. You should also consider the following potential security holes.
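A minimal sketch of this perimeter pattern, with hypothetical object names (the procedure and table share the dbo owner, so the ownership chain stays intact):

```sql
-- Table and procedure are both owned by dbo, forming an ownership chain.
CREATE TABLE dbo.Orders (OrderID int PRIMARY KEY, CustomerID int, Total money);
GO
CREATE PROCEDURE dbo.usp_GetOrdersByCustomer @CustomerID int
AS
SELECT OrderID, Total FROM dbo.Orders WHERE CustomerID = @CustomerID;
GO
-- Callers get EXECUTE on the procedure only; direct table access is denied.
GRANT EXECUTE ON dbo.usp_GetOrdersByCustomer TO AppRole;
DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Orders TO AppRole;
```

Because both objects have the same owner, SQL Server checks only the caller's EXECUTE permission; the DENY on the table is never evaluated when access flows through the procedure.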
External Resources

For more information, see the following resources.
See also
Writing Secure Dynamic SQL in SQL Server

SQL injection is the process by which a malicious user enters Transact-SQL statements instead of valid input. If the input is passed directly to the server without being validated, and if the application inadvertently executes the injected code, the attack has the potential to damage or destroy data.

Any procedure that constructs SQL statements should be reviewed for injection vulnerabilities, because SQL Server will execute all syntactically valid queries that it receives. Even parameterized data can be manipulated by a skilled and determined attacker. If you use dynamic SQL, be sure to parameterize your commands, and never include parameter values directly in the query string.

Anatomy of a SQL Injection Attack

The injection process works by prematurely terminating a text string and appending a new command. Because the inserted command may have additional strings appended to it before it is executed, the attacker terminates the injected string with a comment mark, "--". Subsequent text is ignored at execution time. Multiple commands can be inserted using a semicolon (;) delimiter. As long as injected SQL code is syntactically correct, tampering cannot be detected programmatically. Therefore, you must validate all user input and carefully review code that executes constructed SQL commands on the server that you are using. Never concatenate user input that is not validated. String concatenation is the primary point of entry for script injection. Here are some helpful guidelines:
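On the client side, these guidelines come down to passing user input through SqlParameter objects rather than concatenating it. A hedged ADO.NET sketch (the table, column, and method names are hypothetical):

```csharp
using System.Data;
using System.Data.SqlClient;

class ParameterizedQueryExample
{
    // The user-supplied value travels as a typed parameter, never as
    // part of the command text, so it cannot terminate the string and
    // append its own commands.
    static DataTable FindCustomers(string connectionString, string userInput)
    {
        const string sql =
            "SELECT CustomerID, Name FROM dbo.Customers WHERE Name = @name";
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.Add("@name", SqlDbType.NVarChar, 50).Value = userInput;
            var table = new DataTable();
            new SqlDataAdapter(command).Fill(table);
            return table;
        }
    }
}
```

An input such as "'; DROP TABLE dbo.Customers --" is simply compared against the Name column as a literal value.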
Dynamic SQL Strategies

Executing dynamically created SQL statements in your procedural code breaks the ownership chain, causing SQL Server to check the permissions of the caller against the objects being accessed by the dynamic SQL. SQL Server has methods for granting users access to data using stored procedures and user-defined functions that execute dynamic SQL.
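Where dynamic SQL is unavoidable, one standard server-side approach (sketched here with hypothetical names) is to pass values through sp_executesql parameters instead of concatenating them into the statement:

```sql
CREATE PROCEDURE dbo.usp_FindProduct @ProductName nvarchar(50)
AS
BEGIN
    -- The search value travels as a typed parameter, not as part of the
    -- statement text, so it cannot terminate the string and inject
    -- additional commands.
    DECLARE @sql nvarchar(200) =
        N'SELECT ProductID, Name FROM Production.Product WHERE Name = @name';
    EXEC sp_executesql @sql, N'@name nvarchar(50)', @name = @ProductName;
END;
```

Parameterizing with sp_executesql also lets SQL Server reuse the cached plan for the statement across different parameter values.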
EXECUTE AS

The EXECUTE AS clause replaces the permissions of the caller with those of the user specified in the EXECUTE AS clause. Nested stored procedures or triggers execute under the security context of the proxy user. This can break applications that rely on row-level security or require auditing. Some functions that return the identity of the user return the user specified in the EXECUTE AS clause, not the original caller. Execution context is reverted to the original caller only after execution of the procedure or when a REVERT statement is issued.

Certificate Signing

When a stored procedure that has been signed with a certificate executes, the permissions granted to the certificate user are merged with those of the caller. The execution context remains the same; the certificate user does not impersonate the caller. Signing stored procedures requires several steps to implement. Each time the procedure is modified, it must be re-signed.

Cross-Database Access

Cross-database ownership chaining does not work in cases where dynamically created SQL statements are executed. You can work around this in SQL Server by creating a stored procedure that accesses data in another database and signing the procedure with a certificate that exists in both databases. This gives users access to the database resources used by the procedure without granting them database access or permissions.

External Resources

For more information, see the following resources.
See also
Signing Stored Procedures in SQL Server

A digital signature is a data digest encrypted with the private key of the signer. The private key ensures that the digital signature is unique to its bearer or owner. You can sign stored procedures, functions (except for inline table-valued functions), triggers, and assemblies.

You can sign a stored procedure with a certificate or an asymmetric key. This is designed for scenarios in which permissions cannot be inherited through ownership chaining or the ownership chain is broken, such as with dynamic SQL. You can then create a user mapped to the certificate, granting the certificate user permissions on the objects the stored procedure needs to access. You can also create a login mapped to the same certificate, and then grant any necessary server-level permissions to that login, or add the login to one or more of the fixed server roles. This is designed to avoid enabling the TRUSTWORTHY database setting for scenarios in which higher-level permissions are needed.

When the stored procedure is executed, SQL Server combines the permissions of the certificate user and/or login with those of the caller. Unlike the EXECUTE AS clause, it does not change the execution context of the procedure. Built-in functions that return login and user names return the name of the caller, not the certificate user name.

Creating Certificates

When you sign a stored procedure with a certificate or asymmetric key, a data digest consisting of the encrypted hash of the stored procedure code, along with the execute-as user, is created using the private key. At run time, the data digest is decrypted with the public key and compared with the hash value of the stored procedure. Changing the execute-as user invalidates the hash value so that the digital signature no longer matches. Modifying the stored procedure drops the signature entirely, which prevents someone who does not have access to the private key from changing the stored procedure code.
In either case, you must re-sign the procedure each time you change the code or the execute-as user. There are two required steps involved in signing a module:
Once the module has been signed, one or more principals need to be created in order to hold the additional permissions that should be associated with the certificate. If the module needs additional database-level permissions:
If the module needs additional server-level permissions:
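The signing workflow described above can be sketched in Transact-SQL. All names here (SignCert, dbo.GetOrders, dbo.OrderData, SomeRole) are hypothetical, and the password and permissions are placeholders:

```sql
-- 1. Create a certificate in the database that contains the procedure.
CREATE CERTIFICATE SignCert
    ENCRYPTION BY PASSWORD = 'StrongPassword1!'
    WITH SUBJECT = 'Certificate for signing dbo.GetOrders';

-- 2. Sign the stored procedure with the certificate.
ADD SIGNATURE TO dbo.GetOrders
    BY CERTIFICATE SignCert WITH PASSWORD = 'StrongPassword1!';

-- 3. Create a database user mapped to the certificate and grant it
--    the permissions the procedure needs on the underlying objects.
CREATE USER SignCertUser FROM CERTIFICATE SignCert;
GRANT SELECT ON dbo.OrderData TO SignCertUser;

-- Callers need only EXECUTE permission on the signed procedure.
GRANT EXECUTE ON dbo.GetOrders TO SomeRole;
```

Remember that modifying dbo.GetOrders drops the signature, so the ADD SIGNATURE step must be repeated after every change to the procedure.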
Note A certificate cannot grant permissions to a user that has had permissions revoked using the DENY statement. DENY always takes precedence over GRANT, preventing the caller from inheriting permissions granted to the certificate user.
External Resources
For more information, see the following resources.
See also
Customizing Permissions with Impersonation in SQL Server
Many applications use stored procedures to access data, relying on ownership chaining to restrict access to base tables. You can grant EXECUTE permissions on stored procedures, revoking or denying permissions on the base tables. SQL Server does not check the permissions of the caller if the stored procedure and tables have the same owner. However, ownership chaining doesn't work if objects have different owners or in the case of dynamic SQL. You can use the EXECUTE AS clause in a stored procedure when the caller doesn't have permissions on the referenced database objects. The effect of the EXECUTE AS clause is that the execution context is switched to the proxy user. All code, as well as any calls to nested stored procedures or triggers, executes under the security context of the proxy user. Execution context is reverted to the original caller only after execution of the procedure or when a REVERT statement is issued.
Context Switching with the EXECUTE AS Statement
The Transact-SQL EXECUTE AS statement allows you to switch the execution context of a statement by impersonating another login or database user. This is a useful technique for testing queries and procedures as another user.
EXECUTE AS LOGIN = 'loginName';
EXECUTE AS USER = 'userName';
You must have IMPERSONATE permissions on the login or user you are impersonating. This permission is implied for sysadmin for all databases, and db_owner role members in databases that they own.
Granting Permissions with the EXECUTE AS Clause
You can use the EXECUTE AS clause in the definition header of a stored procedure, trigger, or user-defined function (except for inline table-valued functions). This causes the procedure to execute in the context of the user name or keyword specified in the EXECUTE AS clause. You can create a proxy user in the database that is not mapped to a login, granting it only the necessary permissions on the objects accessed by the procedure.
Only the proxy user specified in the EXECUTE AS clause must have permissions on all objects accessed by the module. Note Some actions, such as TRUNCATE TABLE, do not have grantable permissions. By incorporating the statement within a procedure and specifying a proxy user who has ALTER TABLE permissions, you can extend the permissions to truncate the table to callers who have only EXECUTE permissions on the procedure. The context specified in the EXECUTE AS clause is valid for the duration of the procedure, including nested procedures and triggers. Context reverts to the caller when execution is complete or the REVERT statement is issued. There are three steps involved in using the EXECUTE AS clause in a procedure.
CREATE USER proxyUser WITHOUT LOGIN
CREATE PROCEDURE [procName] WITH EXECUTE AS 'proxyUser' AS ...
Note Applications that require auditing can break because the original security context of the caller is not retained. Built-in functions that return the identity of the current user, such as SESSION_USER, USER, or USER_NAME, return the user associated with the EXECUTE AS clause, not the original caller.
Using EXECUTE AS with REVERT
You can use the Transact-SQL REVERT statement to revert to the original execution context. The optional clause, WITH NO REVERT COOKIE = @variableName, allows you to switch the execution context back to the caller if the @variableName variable contains the correct value. This allows you to switch the execution context back to the caller in environments where connection pooling is used. Because the value of @variableName is known only to the caller of the EXECUTE AS statement, the caller can guarantee that the execution context cannot be changed by the end user that invokes the application. When the connection is closed, it is returned to the pool. For more information on connection pooling in ADO.NET, see SQL Server Connection Pooling (ADO.NET).
Specifying the Execution Context
In addition to specifying a user, you can also use EXECUTE AS with any of the following keywords.
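The cookie-based context switch and revert described above can be sketched in Transact-SQL using the documented WITH COOKIE INTO form; proxyUser is a hypothetical database user:

```sql
-- Capture a cookie when switching context so that only code that
-- holds the cookie can revert to the original caller.
DECLARE @cookie varbinary(8000);
EXECUTE AS USER = 'proxyUser' WITH COOKIE INTO @cookie;

-- Statements here run in the security context of proxyUser.
SELECT USER_NAME();  -- returns 'proxyUser', not the original caller

-- REVERT succeeds only when the correct cookie value is supplied.
REVERT WITH COOKIE = @cookie;
```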
See also
Granting Row-Level Permissions in SQL Server
In some scenarios, there is a requirement to control access to data at a more granular level than what simply granting, revoking, or denying permissions provides. For example, a hospital database application may require individual doctors to be restricted to accessing information related to only their patients. Similar requirements exist in many environments, including finance, law, government, and military applications. To help address these scenarios, SQL Server 2016 provides a Row-Level Security feature that simplifies and centralizes row-level access logic in a security policy. For earlier versions of SQL Server, similar functionality can be achieved by using views to enact row-level filtering.
Implementing Row-level Filtering
Row-level filtering is used for applications storing information in a single table, as in the hospital example above. To implement row-level filtering, each row has a column that defines a differentiating parameter, such as a user name, label, or other identifier. You create either a security policy or a view on the table, which filters the rows that the user can access. You then create parameterized stored procedures, which control the types of queries the user can execute. The following example describes how to configure row-level filtering based on a user or login name:
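As a hedged sketch of the view-based approach for earlier versions of SQL Server, with hypothetical table, column, and role names (dbo.Patients, DoctorLogin, DoctorsRole):

```sql
-- The Patients table stores the login name of the owning doctor in
-- the DoctorLogin column, which acts as the differentiating parameter.
CREATE VIEW dbo.vPatients
AS
SELECT PatientID, PatientName, DoctorLogin
FROM dbo.Patients
WHERE DoctorLogin = SUSER_SNAME();  -- only the caller's own rows
GO

-- Grant access through the view; do not grant access to the base table.
GRANT SELECT ON dbo.vPatients TO DoctorsRole;
```

Each doctor connecting under their own login then sees only rows whose DoctorLogin matches SUSER_SNAME().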
See also
Creating Application Roles in SQL Server
Application roles provide a way to assign permissions to an application instead of a database role or user. Users can connect to the database, activate the application role, and assume the permissions granted to the application. The permissions granted to the application role are in force for the duration of the connection. Important Application roles are activated when a client application supplies an application role name and a password in the connection string. They present a security vulnerability in a two-tier application because the password must be stored on the client computer. In a three-tier application, you can store the password so that it cannot be accessed by users of the application.
Application Role Features
Application roles have the following features:
The Principle of Least Privilege
Application roles should be granted only the permissions they require, in case the password is compromised. Permissions to the public role should be revoked in any database using an application role. Disable the guest account in any database you do not want callers of the application role to have access to.
Application Role Enhancements
The execution context can be switched back to the original caller after activating an application role, removing the need to disable connection pooling. The sp_setapprole procedure has a new option that creates a cookie, which contains context information about the caller. You can revert the session by calling the sp_unsetapprole procedure, passing it the cookie.
Application Role Alternatives
Application roles depend on the security of a password, which presents a potential security vulnerability. Passwords may be exposed by being embedded in application code or saved on disk. You may want to consider the following alternatives.
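The cookie-based activation and revert described under Application Role Enhancements can be sketched in Transact-SQL; the role name and password here are hypothetical:

```sql
-- Activate the application role and capture a cookie describing the
-- original caller's context.
DECLARE @cookie varbinary(8000);
EXEC sp_setapprole 'SalesAppRole', 'AppRolePassword1!',
     @fCreateCookie = true, @cookie = @cookie OUTPUT;

-- The connection now holds the permissions of the application role.

-- Revert to the original context (for example, before the connection
-- is returned to the pool).
EXEC sp_unsetapprole @cookie;
```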
External Resources
For more information, see the following resources.
See also
Enabling Cross-Database Access in SQL Server
Cross-database ownership chaining occurs when a procedure in one database depends on objects in another database. A cross-database ownership chain works in the same way as ownership chaining within a single database, except that an unbroken ownership chain requires that all the object owners are mapped to the same login account. If the source object in the source database and the target objects in the target databases are owned by the same login account, SQL Server does not check permissions on the target objects.
Off By Default
Ownership chaining across databases is turned off by default. Microsoft recommends that you disable cross-database ownership chaining because it exposes you to the following security risks:
Enabling Cross-database Ownership Chaining
Cross-database ownership chaining should only be enabled in environments where you can fully trust highly-privileged users. It can be configured during setup for all databases, or selectively for specific databases using the Transact-SQL commands sp_configure and ALTER DATABASE. To selectively configure cross-database ownership chaining, use sp_configure to turn it off for the server. Then use the ALTER DATABASE command with SET DB_CHAINING ON to configure cross-database ownership chaining for only the databases that require it. The following sample turns on cross-database ownership chaining for all databases:
EXECUTE sp_configure 'show advanced', 1; RECONFIGURE; EXECUTE sp_configure 'cross db ownership chaining', 1; RECONFIGURE;
The following sample turns on cross-database ownership chaining for specific databases:
ALTER DATABASE Database1 SET DB_CHAINING ON; ALTER DATABASE Database2 SET DB_CHAINING ON;
Dynamic SQL
Cross-database ownership chaining does not work in cases where dynamically created SQL statements are executed unless the same user exists in both databases. You can work around this in SQL Server by creating a stored procedure that accesses data in another database and signing the procedure with a certificate that exists in both databases. This gives users access to the database resources used by the procedure without granting them database access or permissions.
External Resources
For more information, see the following resources.
See also
SQL Server Express Security
Microsoft SQL Server Express Edition (SQL Server Express) is based on Microsoft SQL Server, and supports most of the features of the database engine. It is designed so that nonessential features and network connectivity are off by default. This reduces the surface area available for attack by a malicious user. SQL Server Express is usually installed as a named instance. The default name of the instance is SQLExpress. A named instance is identified by the network name of the computer plus the instance name that you specify during installation.
Network Access
For security reasons, networking protocols are disabled by default in SQL Server Express. This prevents attacks from outside users that might compromise the computer that hosts the instance of SQL Server Express. You must explicitly enable network connectivity and start the SQL Server Browser service to connect to a SQL Server Express instance from another computer. Once network connectivity is enabled, a SQL Server Express instance has the same security requirements as the other editions of SQL Server.
User Instances
A user instance is a separate instance of the SQL Server Express database engine that is generated by a parent instance of SQL Server Express. The primary goal of a user instance is to allow users who are running Windows under a least-privilege user account to have system administrator (sysadmin) privileges on the SQL Server Express instance on their local computer. User instances are not intended for users who are system administrators on their own computers. A user instance is generated from a primary instance of SQL Server or SQL Server Express on behalf of a user. It runs as a user process under the Windows security context of the user, not as a service. SQL Server logins are disallowed; only Windows logins are supported. This prevents software executing on a user instance from making system-wide changes that the user would not have permissions to make.
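As a sketch, a client might connect to a user instance with a connection string such as the following; the instance name (SQLExpress) and database file name (MyApp.mdf) are assumptions:

```csharp
using System.Data.SqlClient;

class UserInstanceExample
{
    static void Main()
    {
        // User Instance=true requires Integrated Security and a named
        // SQL Server Express parent instance (here .\SQLExpress).
        string connectionString =
            @"Data Source=.\SQLExpress;Integrated Security=true;" +
            "User Instance=true;" +
            @"AttachDbFilename=|DataDirectory|\MyApp.mdf";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            // Opening the connection generates (or reuses) the user
            // instance on behalf of the current Windows user.
            connection.Open();
        }
    }
}
```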
A user instance is also known as a child or client instance, and is sometimes referred to by using the RANU acronym ("run as normal user"). Each user instance is isolated from its parent instance and from other user instances running on the same computer. Databases installed on user instances are opened in single-user mode only; multiple users cannot connect to them. Replication, distributed queries, and remote connections are disabled for user instances. When connected to a user instance, users do not have any special privileges on the parent SQL Server Express instance.
External Resources
For more information about SQL Server Express, see the following resources.
See also
SQL Server Data Types and ADO.NET
SQL Server and the .NET Framework are based on different type systems, which can result in potential data loss. To preserve data integrity, the .NET Framework Data Provider for SQL Server (System.Data.SqlClient) provides typed accessor methods for working with SQL Server data. You can use the SqlDbType enumeration to specify SqlParameter data types. For more information and a table that describes the data type mappings between SQL Server and .NET Framework data types, see SQL Server Data Type Mappings. SQL Server 2008 introduces new data types that are designed to meet business needs to work with date and time, structured, semi-structured, and unstructured data. These are documented in SQL Server 2008 Books Online. The SQL Server data types that are available for use in your application depend on the version of SQL Server that you are using. For more information, see the relevant version of SQL Server Books Online in the following table.
SQL Server Books Online
In This Section
SqlTypes and the DataSet
Handling Null Values
Comparing GUID and uniqueidentifier Values
Date and Time Data
Large UDTs
XML Data in SQL Server
Reference
DataSet
System.Data.SqlTypes
SqlDbType
DbType
See also
SqlTypes and the DataSet
ADO.NET 2.0 introduced enhanced type support for the DataSet through the System.Data.SqlTypes namespace. The types in System.Data.SqlTypes are designed to provide data types with the same semantics and precision as the data types in a SQL Server database. Each data type in System.Data.SqlTypes has an equivalent data type in SQL Server, with the same underlying data representation. Using System.Data.SqlTypes directly in a DataSet confers several benefits when working with SQL Server data types. System.Data.SqlTypes supports the same semantics as SQL Server native data types. Specifying one of the System.Data.SqlTypes in the definition of a DataColumn eliminates the loss of precision that can occur when converting decimal or numeric data types to one of the common language runtime (CLR) data types.
Example
The following example creates a DataTable object, explicitly defining the DataColumn data types by using System.Data.SqlTypes instead of CLR types. The code fills the DataTable with data from the Sales.SalesOrderDetail table in the AdventureWorks database in SQL Server. The output displayed in the console window shows the data type of each column, and the values retrieved from SQL Server.
C#
static private void GetSqlTypesAW(string connectionString) { // Create a DataTable and specify a SqlType // for each column. DataTable table = new DataTable(); DataColumn idColumn = table.Columns.Add("SalesOrderID", typeof(SqlInt32)); DataColumn priceColumn = table.Columns.Add("UnitPrice", typeof(SqlMoney)); DataColumn totalColumn = table.Columns.Add("LineTotal", typeof(SqlDecimal)); DataColumn columnModifiedDate = table.Columns.Add("ModifiedDate", typeof(SqlDateTime)); // Open a connection to SQL Server and fill the DataTable // with data from the Sales.SalesOrderDetail table // in the AdventureWorks sample database.
using (SqlConnection connection = new SqlConnection(connectionString)) { string queryString = "SELECT TOP 5 SalesOrderID, UnitPrice, LineTotal, ModifiedDate " + "FROM Sales.SalesOrderDetail WHERE LineTotal < @LineTotal"; // Create the SqlCommand. SqlCommand command = new SqlCommand(queryString, connection); // Create the SqlParameter and assign a value. SqlParameter parameter = new SqlParameter("@LineTotal", SqlDbType.Decimal); parameter.Value = 1.5; command.Parameters.Add(parameter); // Open the connection and load the data. connection.Open(); SqlDataReader reader = command.ExecuteReader(CommandBehavior.CloseConnection); table.Load(reader); // Close the SqlDataReader. reader.Close(); } // Display the SqlType of each column. Console.WriteLine("Data Types:"); foreach (DataColumn column in table.Columns) { Console.WriteLine(" {0} -- {1}", column.ColumnName, column.DataType.UnderlyingSystemType); } // Display the value for each row. Console.WriteLine("Values:"); foreach (DataRow row in table.Rows) { Console.Write(" {0}, ", row["SalesOrderID"]); Console.Write(" {0}, ", row["UnitPrice"]); Console.Write(" {0}, ", row["LineTotal"]); Console.Write(" {0} ", row["ModifiedDate"]); Console.WriteLine(); } }
See also
Handling Null Values
A null value in a relational database is used when the value in a column is unknown or missing. A null is neither an empty string (for character or datetime data types) nor a zero value (for numeric data types). The ANSI SQL-92 specification states that a null must be the same for all data types, so that all nulls are handled consistently. The System.Data.SqlTypes namespace provides null semantics by implementing the INullable interface. Each of the data types in System.Data.SqlTypes has its own IsNull property and a Null value that can be assigned to an instance of that data type. Note The .NET Framework version 2.0 introduced support for nullable types, which allow programmers to extend a value type to represent all values of the underlying type. These CLR nullable types represent an instance of the Nullable structure. This capability is especially useful when value types are boxed and unboxed, providing enhanced compatibility with object types. CLR nullable types are not intended for storage of database nulls because an ANSI SQL null does not behave the same way as a null reference (or Nothing in Visual Basic). For working with database ANSI SQL null values, use System.Data.SqlTypes nulls rather than Nullable. For more information on working with CLR nullable types in Visual Basic, see Nullable Value Types, and for C#, see Using Nullable Types.
Nulls and Three-Valued Logic
Allowing null values in column definitions introduces three-valued logic into your application. A comparison can evaluate to one of three conditions:
Because null is considered to be unknown, two null values compared to each other are not considered to be equal. In expressions using arithmetic operators, if any of the operands is null, the result is null as well.
Nulls and SqlBoolean
Comparisons between any System.Data.SqlTypes return a SqlBoolean. The IsNull function for each SqlType returns a SqlBoolean and can be used to check for null values. The following truth tables show how the AND, OR, and NOT operators function in the presence of a null value. (T=true, F=false, and U=unknown, or null.)
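The three-valued behavior described above can be demonstrated with a short C# sketch using the System.Data.SqlTypes types:

```csharp
using System;
using System.Data.SqlTypes;

class ThreeValuedLogic
{
    static void Main()
    {
        SqlInt32 x = SqlInt32.Null;
        SqlInt32 y = new SqlInt32(10);

        // A comparison involving a null operand yields SqlBoolean.Null,
        // not false.
        SqlBoolean result = (x == y);
        Console.WriteLine(result.IsNull);   // True

        // Arithmetic with a null operand propagates the null.
        SqlInt32 sum = x + y;
        Console.WriteLine(sum.IsNull);      // True

        // Use IsNull to test for null values explicitly.
        Console.WriteLine(x.IsNull);        // True
    }
}
```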
Understanding the ANSI_NULLS Option
System.Data.SqlTypes provides the same semantics as when the ANSI_NULLS option is set on in SQL Server. All arithmetic operators (+, -, *, /, %), bitwise operators (~, &, |), and most functions return null if any of the operands or arguments is null, except for the property IsNull. The ANSI SQL-92 standard does not support columnName = NULL in a WHERE clause. In SQL Server, the ANSI_NULLS option controls both default nullability in the database and evaluation of comparisons against null values. If ANSI_NULLS is turned on (the default), the IS NULL operator must be used in expressions when testing for null values. For example, the following comparison always yields unknown when ANSI_NULLS is on:
colname > NULL
Comparison to a variable containing a null value also yields unknown:
colname > @MyVariable
Use the IS NULL or IS NOT NULL predicate to test for a null value. This can add complexity to the WHERE clause. For example, the TerritoryID column in the AdventureWorks Customer table allows null values. If a SELECT statement is to test for null values in addition to others, it must include an IS NULL predicate:
SELECT CustomerID, AccountNumber, TerritoryID FROM AdventureWorks.Sales.Customer WHERE TerritoryID IN (1, 2, 3) OR TerritoryID IS NULL
If you set ANSI_NULLS off in SQL Server, you can create expressions that use the equality operator to compare to null. However, you can't prevent different connections from setting null options for that connection. Using IS NULL to test for null values always works, regardless of the ANSI_NULLS settings for a connection. Setting ANSI_NULLS off is not supported in a DataSet, which always follows the ANSI SQL-92 standard for handling null values in System.Data.SqlTypes.
Assigning Null Values
Null values are special, and their storage and assignment semantics differ across different type systems and storage systems. A DataSet is designed to be used with different type and storage systems.
This section describes the null semantics for assigning null values to a DataColumn in a DataRow across the different type systems.
DBNull.Value
SqlType.Null
null
derivedUdt.Null
Note The Nullable<T> structure is not currently supported in the DataSet.
Multiple Column (Row) Assignment
DataTable.Add, DataTable.LoadDataRow, or other APIs that accept an ItemArray that gets mapped to a row, map 'null' to the DataColumn's default value. If an object in the array contains DBNull.Value or its strongly typed counterpart, the same rules as described above are applied. In addition, the following rules apply for an instance of DataRow["columnName"] null assignments:
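As a brief illustration of the assignment semantics above (column names here are hypothetical), a type-specific SqlTypes null and DBNull.Value can each be assigned to the appropriate column kind:

```csharp
using System;
using System.Data;
using System.Data.SqlTypes;

class NullAssignment
{
    static void Main()
    {
        DataTable table = new DataTable();
        // A strongly typed SqlTypes column and an ordinary CLR column.
        table.Columns.Add("Quantity", typeof(SqlInt32));
        DataColumn nameColumn = table.Columns.Add("Name", typeof(string));

        DataRow row = table.NewRow();
        row["Quantity"] = SqlInt32.Null;  // type-specific SqlTypes null
        row["Name"] = DBNull.Value;       // database null for a CLR column
        table.Rows.Add(row);

        Console.WriteLine(((SqlInt32)row["Quantity"]).IsNull);  // True
        Console.WriteLine(row.IsNull(nameColumn));              // True
    }
}
```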
Assigning Null Values
The default value for any System.Data.SqlTypes instance is null. Nulls in System.Data.SqlTypes are type-specific and cannot be represented by a single value, such as DbNull. Use the IsNull property to check for nulls. Null values can be assigned to a DataColumn as shown in the following code example. You can directly assign null values to SqlTypes variables without triggering an exception.
Example
The following code example creates a DataTable with two columns defined as SqlInt32 and SqlString. The code adds one row of known values, one row of null values, and then iterates through the DataTable, assigning the values to variables and displaying the output in the console window.
C#
static private void WorkWithSqlNulls() { DataTable table = new DataTable(); // Specify the SqlType for each column. DataColumn idColumn = table.Columns.Add("ID", typeof(SqlInt32)); DataColumn descColumn = table.Columns.Add("Description", typeof(SqlString)); // Add some data. DataRow nRow = table.NewRow(); nRow["ID"] = 123; nRow["Description"] = "Side Mirror"; table.Rows.Add(nRow); // Add null values. nRow = table.NewRow(); nRow["ID"] = SqlInt32.Null; nRow["Description"] = SqlString.Null; table.Rows.Add(nRow); // Initialize variables to use when // extracting the data. SqlBoolean isColumnNull = false; SqlInt32 idValue = SqlInt32.Zero; SqlString descriptionValue = SqlString.Null; // Iterate through the DataTable and display the values. foreach (DataRow row in table.Rows) { // Assign values to variables. Note that you // do not have to test for null values. idValue = (SqlInt32)row["ID"]; descriptionValue = (SqlString)row["Description"]; // Test for null value in ID column. isColumnNull = idValue.IsNull; // Display variable values in console window.
Console.Write("isColumnNull={0}, ID={1}, Description={2}", isColumnNull, idValue, descriptionValue); Console.WriteLine(); } } This example displays the following results: isColumnNull=False, ID=123, Description=Side Mirror isColumnNull=True, ID=Null, Description=Null
Comparing Null Values with SqlTypes and CLR Types
When comparing null values, it is important to understand the difference between the way the Equals method evaluates null values in System.Data.SqlTypes as compared with the way it works with CLR types. All of the System.Data.SqlTypes Equals methods use database semantics for evaluating null values: if either or both of the values is null, the comparison yields null. On the other hand, using the CLR Equals method on two System.Data.SqlTypes will yield true if both are null. This reflects the difference between using an instance method such as the CLR String.Equals method, and using the static/shared method, SqlString.Equals. The following example demonstrates the difference in results between the SqlString.Equals method and the String.Equals method when each is passed a pair of null values and then a pair of empty strings.
C#
private static void CompareNulls() { // Create two new null strings. SqlString a = new SqlString(); SqlString b = new SqlString(); // Compare nulls using static/shared SqlString.Equals. Console.WriteLine("SqlString.Equals shared/static method:"); Console.WriteLine(" Two nulls={0}", SqlStringEquals(a, b)); // Compare nulls using instance method String.Equals. Console.WriteLine(); Console.WriteLine("String.Equals instance method:"); Console.WriteLine(" Two nulls={0}", StringEquals(a, b)); // Make them empty strings. a = ""; b = ""; // When comparing two empty strings (""), both the shared/static and // the instance Equals methods evaluate to true.
Console.WriteLine(); Console.WriteLine("SqlString.Equals shared/static method:"); Console.WriteLine(" Two empty strings={0}", SqlStringEquals(a, b)); Console.WriteLine(); Console.WriteLine("String.Equals instance method:"); Console.WriteLine(" Two empty strings={0}", StringEquals(a, b)); } private static string SqlStringEquals(SqlString string1, SqlString string2) { // SqlString.Equals uses database semantics for evaluating nulls. string returnValue = SqlString.Equals(string1, string2).ToString(); return returnValue; } private static string StringEquals(SqlString string1, SqlString string2) { // String.Equals uses CLR type semantics for evaluating nulls. string returnValue = string1.Equals(string2).ToString(); return returnValue; } } The code produces the following output: SqlString.Equals shared/static method: Two nulls=Null String.Equals instance method: Two nulls=True SqlString.Equals shared/static method: Two empty strings=True String.Equals instance method: Two empty strings=True
See also
Comparing GUID and uniqueidentifier Values
The globally unique identifier (GUID) data type in SQL Server is represented by the uniqueidentifier data type, which stores a 16-byte binary value. A GUID is a binary number, and its main use is as an identifier that must be unique in a network that has many computers at many sites. GUIDs can be generated by calling the Transact-SQL NEWID function, and are guaranteed to be unique throughout the world. For more information, see uniqueidentifier (Transact-SQL).
Working with SqlGuid Values
Because GUID values are long and obscure, they are not meaningful for users. If randomly generated GUIDs are used for key values and you insert a lot of rows, you get random I/O into your indexes, which can negatively impact performance. GUIDs are also relatively large when compared to other data types. In general, we recommend using GUIDs only for very narrow scenarios for which no other data type is suitable.
Comparing GUID Values
Comparison operators can be used with uniqueidentifier values. However, ordering is not implemented by comparing the bit patterns of the two values. The only operations that are allowed against a uniqueidentifier value are comparisons (=, <>, <, >, <=, >=) and checking for NULL (IS NULL and IS NOT NULL). No other arithmetic operators are allowed. Both Guid and SqlGuid have a CompareTo method for comparing different GUID values. However, System.Guid.CompareTo and SqlTypes.SqlGuid.CompareTo are implemented differently. SqlGuid implements CompareTo using SQL Server behavior, in which the last six bytes of a value are most significant. Guid evaluates all 16 bytes. The following example demonstrates this behavioral difference. The first section of code displays unsorted Guid values, and the second section of code shows the sorted Guid values. The third section shows the sorted SqlGuid values. The output is displayed beneath the code listing.
C#
private static void WorkWithGuids() { // Create an ArrayList and fill it with Guid values. ArrayList guidList = new ArrayList(); guidList.Add(new Guid("3AAAAAAA-BBBB-CCCC-DDDD-2EEEEEEEEEEE")); guidList.Add(new Guid("2AAAAAAA-BBBB-CCCC-DDDD-1EEEEEEEEEEE")); guidList.Add(new Guid("1AAAAAAA-BBBB-CCCC-DDDD-3EEEEEEEEEEE")); // Display the unsorted Guid values. Console.WriteLine("Unsorted Guids:"); foreach (Guid guidValue in guidList) { Console.WriteLine(" {0}", guidValue); } Console.WriteLine(""); // Sort the Guids. guidList.Sort(); // Display the sorted Guid values. Console.WriteLine("Sorted Guids:"); foreach (Guid guidSorted in guidList) { Console.WriteLine(" {0}", guidSorted); } Console.WriteLine(""); // Create an ArrayList of SqlGuids. ArrayList sqlGuidList = new ArrayList(); sqlGuidList.Add(new SqlGuid("3AAAAAAA-BBBB-CCCC-DDDD-2EEEEEEEEEEE")); sqlGuidList.Add(new SqlGuid("2AAAAAAA-BBBB-CCCC-DDDD-1EEEEEEEEEEE")); sqlGuidList.Add(new SqlGuid("1AAAAAAA-BBBB-CCCC-DDDD-3EEEEEEEEEEE")); // Sort the SqlGuids.
// The unsorted SqlGuids are in the same order as the unsorted Guid values. sqlGuidList.Sort(); // Display the sorted SqlGuids. The sorted SqlGuid values are ordered differently than the Guid values. Console.WriteLine("Sorted SqlGuids:"); foreach (SqlGuid sqlGuidValue in sqlGuidList) { Console.WriteLine(" {0}", sqlGuidValue); } } This example produces the following results. Unsorted Guids: 3aaaaaaa-bbbb-cccc-dddd-2eeeeeeeeeee 2aaaaaaa-bbbb-cccc-dddd-1eeeeeeeeeee 1aaaaaaa-bbbb-cccc-dddd-3eeeeeeeeeee Sorted Guids: 1aaaaaaa-bbbb-cccc-dddd-3eeeeeeeeeee 2aaaaaaa-bbbb-cccc-dddd-1eeeeeeeeeee 3aaaaaaa-bbbb-cccc-dddd-2eeeeeeeeeee Sorted SqlGuids: 2aaaaaaa-bbbb-cccc-dddd-1eeeeeeeeeee 3aaaaaaa-bbbb-cccc-dddd-2eeeeeeeeeee 1aaaaaaa-bbbb-cccc-dddd-3eeeeeeeeeee
See also
Date and Time Data
SQL Server 2008 introduces new data types for handling date and time information. The new data types include separate types for date and time, and expanded data types with greater range, precision, and time-zone awareness. Starting with the .NET Framework version 3.5 Service Pack (SP) 1, the .NET Framework Data Provider for SQL Server (System.Data.SqlClient) provides full support for all the new features of the SQL Server 2008 Database Engine. You must install the .NET Framework 3.5 SP1 (or later) to use these new features with SqlClient. Versions of SQL Server earlier than SQL Server 2008 only had two data types for working with date and time values: datetime and smalldatetime. Both of these data types contain both the date value and a time value, which makes it difficult to work with only date or only time values. Also, these data types only support dates that occur after the introduction of the Gregorian calendar in England in 1753. Another limitation is that these older data types are not time-zone aware, which makes it difficult to work with data that originates from multiple time zones. Complete documentation for SQL Server data types is available in SQL Server Books Online.
The following table lists the version-specific entry-level topics for date and time data.
SQL Server Books Online
Date/Time Data Types Introduced in SQL Server 2008
The following table describes the new date and time data types.
Note For more information about using the Type System Version keyword, see ConnectionString.
Date Format and Date Order
How SQL Server parses date and time values depends not only on the type system version and server version, but also on the server's default language and format settings. A date string that works for the date formats of one language might be unrecognizable if the query is executed by a connection that uses a different language and date format setting. The Transact-SQL SET LANGUAGE statement implicitly sets the DATEFORMAT that determines the order of the date parts. You can use the SET DATEFORMAT Transact-SQL statement on a connection to disambiguate date values by ordering the date parts in MDY, DMY, YMD, YDM, MYD, or DYM order. If you do not specify any DATEFORMAT for the connection, SQL Server uses the default language associated with the connection. For example, a date string of '01/02/03' would be interpreted as MDY (January 2, 2003) on a server with a language setting of United States English, and as DMY (February 1, 2003) on a server with a language setting of British English. The year is determined by using SQL Server's cutoff year rule, which defines the cutoff date for assigning the century value. For more information, see two digit year cutoff Option in SQL Server Books Online. Note The YDM date format is not supported when converting from a string format to date, time, datetime2, or datetimeoffset. For more information about how SQL Server interprets date and time data, see Using Date and Time Data in SQL Server 2008 Books Online.
Date/Time Data Types and Parameters
The following enumerations have been added to SqlDbType to support the new date and time data types.
You can specify the data type of a SqlParameter by using one of the preceding SqlDbType enumerations. Note You cannot set the DbType property of a SqlParameter to SqlDbType.Date. You can also specify the type of a SqlParameter generically by setting the DbType property of a SqlParameter object to a particular DbType enumeration value. The following enumeration values have been added to DbType to support the datetime2 and datetimeoffset data types:
These new enumerations supplement the Date, Time, and DateTime enumerations, which existed in earlier versions of the .NET Framework. The .NET Framework data provider type of a parameter object is inferred from the .NET Framework type of the value of the parameter object, or from the DbType of the parameter object. No new System.Data.SqlTypes data types have been introduced to support the new date and time data types. The following table describes the mappings between the SQL Server 2008 date and time data types and the CLR data types.
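As a quick illustration of these mappings (a sketch, not from the original text), setting the generic DbType of a SqlParameter to DbType.DateTime2 or DbType.DateTimeOffset causes the provider to select the corresponding provider-specific SqlDbType:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class DbTypeMappingDemo
{
    static void Main()
    {
        // Setting the generic DbType selects the provider-specific SqlDbType.
        SqlParameter p1 = new SqlParameter();
        p1.DbType = DbType.DateTime2;
        Console.WriteLine(p1.SqlDbType); // DateTime2

        SqlParameter p2 = new SqlParameter();
        p2.DbType = DbType.DateTimeOffset;
        Console.WriteLine(p2.SqlDbType); // DateTimeOffset
    }
}
```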
SqlParameter Properties
The following table describes SqlParameter properties that are relevant to date and time data types.
Note Time values that are less than zero or greater than or equal to 24 hours will throw an ArgumentException.

Creating Parameters
You can create a SqlParameter object by using its constructor, or by adding it to the Parameters collection of a SqlCommand by calling the Add method of the SqlParameterCollection. The Add method takes as input either constructor arguments or an existing parameter object. The sections that follow provide examples of how to specify date and time parameters. For additional examples of working with parameters, see Configuring Parameters and Parameter Data Types and DataAdapter Parameters.

Date Example
The following code fragment demonstrates how to specify a date parameter.

C#
SqlParameter parameter = new SqlParameter();
parameter.ParameterName = "@Date";
parameter.SqlDbType = SqlDbType.Date;
parameter.Value = "2007/12/1";

Time Example
The following code fragment demonstrates how to specify a time parameter.

C#
SqlParameter parameter = new SqlParameter();
parameter.ParameterName = "@time";
parameter.SqlDbType = SqlDbType.Time;
parameter.Value = DateTime.Parse("23:59:59").TimeOfDay;

Datetime2 Example
The following code fragment demonstrates how to specify a datetime2 parameter with both the date and time parts.

C#
SqlParameter parameter = new SqlParameter();
parameter.ParameterName = "@Datetime2";
parameter.SqlDbType = SqlDbType.DateTime2;
parameter.Value = DateTime.Parse("1666-09-02 1:00:00");

DateTimeOffset Example
The following code fragment demonstrates how to specify a DateTimeOffset parameter with a date, a time, and a time zone offset of 0.

C#
SqlParameter parameter = new SqlParameter();
parameter.ParameterName = "@DateTimeOffset";
parameter.SqlDbType = SqlDbType.DateTimeOffset;
parameter.Value = DateTimeOffset.Parse("1666-09-02 1:00:00+0");

AddWithValue
You can also supply parameters by using the AddWithValue method of a SqlCommand's Parameters collection, as shown in the following code fragment.
However, the AddWithValue method does not allow you to specify the DbType or SqlDbType for the parameter.

C#
command.Parameters.AddWithValue(
    "@date", DateTimeOffset.Parse("16660902"));

The @date parameter could map to a date, datetime, or datetime2 data type on the server. When working with the new datetime data types, you must explicitly set the parameter's SqlDbType property to the data type of the instance. Implicitly supplying parameter values, or supplying them as Variant, can cause problems with backward compatibility with the datetime and smalldatetime data types. The following table shows which SqlDbTypes are inferred from which CLR types:
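To see the inference rules in action (an illustrative sketch based on the mappings described above), assign CLR values of different types to a parameter and read back its SqlDbType. Note that a DateTime value still infers the older datetime type, which is why the text recommends setting SqlDbType explicitly when you target datetime2:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class InferenceDemo
{
    static void Main()
    {
        // A DateTime value infers the older datetime type, not datetime2.
        SqlParameter fromDateTime = new SqlParameter("@p1", DateTime.Now);
        Console.WriteLine(fromDateTime.SqlDbType); // DateTime

        // A DateTimeOffset value infers datetimeoffset.
        SqlParameter fromOffset = new SqlParameter("@p2", DateTimeOffset.Now);
        Console.WriteLine(fromOffset.SqlDbType); // DateTimeOffset

        // A TimeSpan value infers time.
        SqlParameter fromTimeSpan = new SqlParameter("@p3", TimeSpan.FromHours(1));
        Console.WriteLine(fromTimeSpan.SqlDbType); // Time

        // To target datetime2, set the SqlDbType explicitly.
        SqlParameter explicitParam = new SqlParameter("@p4", SqlDbType.DateTime2);
        explicitParam.Value = DateTime.Now;
        Console.WriteLine(explicitParam.SqlDbType); // DateTime2
    }
}
```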
Retrieving Date and Time Data
The following table describes methods that are used to retrieve SQL Server 2008 date and time values.
Note The new date and time SqlDbTypes are not supported for code that is executing in-process in SQL Server. An exception will be raised if one of these types is passed to the server.

Specifying Date and Time Values as Literals
You can specify date and time data types by using a variety of different literal string formats, which SQL Server then evaluates at run time, converting them to internal date/time structures. SQL Server recognizes date and time data that is enclosed in single quotation marks ('). The following examples demonstrate some formats:
Note You can find complete documentation for all of the literal string formats and other features of the date and time data types in SQL Server Books Online.

Resources in SQL Server 2008 Books Online
For more information about working with date and time values in SQL Server 2008, see the following resources in SQL Server 2008 Books Online.
See also
Large UDTs
User-defined types (UDTs) allow a developer to extend the server's scalar type system by storing common language runtime (CLR) objects in a SQL Server database. UDTs can contain multiple elements and can have behaviors, unlike the traditional alias data types, which consist of a single SQL Server system data type.

Note You must install the .NET Framework 3.5 SP1 (or later) to take advantage of the enhanced SqlClient support for large UDTs.

Previously, UDTs were restricted to a maximum size of 8 kilobytes. In SQL Server 2008, this restriction has been removed for UDTs that have a format of UserDefined. For complete documentation of user-defined types, see SQL Server Books Online for the version of SQL Server you are using.

SQL Server Books Online

Retrieving UDT Schemas Using GetSchema
The GetSchema method of SqlConnection returns database schema information in a DataTable. For more information, see SQL Server Schema Collections.

GetSchemaTable Column Values for UDTs
The GetSchemaTable method of a SqlDataReader returns a DataTable that describes column metadata. The following table describes the differences in the column metadata for large UDTs between SQL Server 2005 and SQL Server 2008.
SqlDataReader Considerations
Beginning with SQL Server 2008, the SqlDataReader has been extended to support retrieving large UDT values. How large UDT values are processed by a SqlDataReader depends on the version of SQL Server you are using, as well as on the Type System Version specified in the connection string. For more information, see ConnectionString.

The following methods of SqlDataReader will return a SqlBinary instead of a UDT when the Type System Version is set to SQL Server 2005:

The following methods will return an array of bytes (Byte[]) instead of a UDT when the Type System Version is set to SQL Server 2005:

Note that no conversions are made for the current version of ADO.NET.

Specifying SqlParameters
The following SqlParameter properties have been extended to work with large UDTs.
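The Type System Version keyword mentioned above is set through the connection string. The following sketch (server and database names are illustrative) uses SqlConnectionStringBuilder to request SQL Server 2005 behavior, under which large UDTs come back as SqlBinary or Byte[] as described:

```csharp
using System;
using System.Data.SqlClient;

class TypeSystemVersionDemo
{
    static void Main()
    {
        SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
        builder.DataSource = "(local)";            // illustrative server name
        builder.IntegratedSecurity = true;
        builder.InitialCatalog = "AdventureWorks"; // illustrative database

        // Request the SQL Server 2005 type system: large UDTs are then
        // returned as SqlBinary / Byte[] instead of the UDT class.
        builder.TypeSystemVersion = "SQL Server 2005";

        Console.WriteLine(builder.ConnectionString);
    }
}
```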
Retrieving Data Example
The following code fragment demonstrates how to retrieve large UDT data. The connectionString variable assumes a valid connection string for a SQL Server database, and the commandString variable assumes a valid SELECT statement with the primary key column listed first.

C#
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    SqlCommand command = new SqlCommand(commandString, connection);
    SqlDataReader reader = command.ExecuteReader();
    while (reader.Read())
    {
        // Retrieve the value of the primary key column.
        int id = reader.GetInt32(0);

        // Retrieve the value of the UDT.
        LargeUDT udt = (LargeUDT)reader[1];

        // You can also use GetSqlValue and GetValue:
        // LargeUDT udt = (LargeUDT)reader.GetSqlValue(1);
        // LargeUDT udt = (LargeUDT)reader.GetValue(1);

        Console.WriteLine("ID={0} LargeUDT={1}", id, udt);
    }
    reader.Close();
}

See also
XML Data in SQL Server
SQL Server exposes the functionality of SQLXML inside the .NET Framework. Developers can write applications that access XML data from an instance of SQL Server, bring the data into the .NET Framework environment, process the data, and send the updates back to SQL Server. XML data can be used in several ways in SQL Server, including data storage, and as parameter values for retrieving data. The SqlXml class in the .NET Framework provides the client-side support for working with data stored in an XML column within SQL Server. For more information, see "SQLXML Managed Classes" in SQL Server Books Online.

In This Section
SQL XML Column Values
Specifying XML Values as Parameters

See also

SQL XML Column Values
SQL Server supports the xml data type, and developers can retrieve result sets that include this type using standard behavior of the SqlCommand class. An xml column can be retrieved just as any column is retrieved (into a SqlDataReader, for example), but if you want to work with the content of the column as XML, you must use an XmlReader.

Example
The following console application selects two rows, each containing an xml column, from the Sales.Store table in the AdventureWorks database into a SqlDataReader instance. For each row, the value of the xml column is read using the GetSqlXml method of SqlDataReader. The value is stored in an XmlReader. Note that you must use GetSqlXml rather than the GetValue method if you want to set the contents to a SqlXml variable; GetValue returns the value of the xml column as a string.

Note The AdventureWorks sample database is not installed by default when you install SQL Server. You can install it by running SQL Server Setup.

C#
// Example assumes the following directives:
// using System.Data.SqlClient;
// using System.Xml;
// using System.Data.SqlTypes;

static void GetXmlData(string connectionString)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();

        // The query includes two specific customers for simplicity's
        // sake. A more realistic approach would use a parameter
        // for the CustomerID criteria. The example selects two rows
        // in order to demonstrate reading first from one row to
        // another, then from one node to another within the xml column.
        string commandText =
            "SELECT Demographics from Sales.Store WHERE " +
            "CustomerID = 3 OR CustomerID = 4";

        SqlCommand commandSales = new SqlCommand(commandText, connection);
        SqlDataReader salesReaderData = commandSales.ExecuteReader();

        // Multiple rows are returned by the SELECT, so each row
        // is read and an XmlReader (an xml data type) is set to the
        // value of its first (and only) column.
        int countRow = 1;
        while (salesReaderData.Read())
        {
            // Must use GetSqlXml here to get a SqlXml type.
            // GetValue returns a string instead of SqlXml.
            SqlXml salesXML = salesReaderData.GetSqlXml(0);
            XmlReader salesReaderXml = salesXML.CreateReader();

            Console.WriteLine("-----Row " + countRow + "-----");

            // Move to the root.
            salesReaderXml.MoveToContent();

            // We know each node type is either Element or Text.
            // All elements within the root are string values.
            // For this simple example, no elements are empty.
            while (salesReaderXml.Read())
            {
                if (salesReaderXml.NodeType == XmlNodeType.Element)
                {
                    string elementLocalName = salesReaderXml.LocalName;
                    salesReaderXml.Read();
                    Console.WriteLine(elementLocalName + ": " + salesReaderXml.Value);
                }
            }
            countRow = countRow + 1;
        }
    }
}

See also

Specifying XML Values as Parameters
If a query requires a parameter whose value is an XML string, developers can supply that value using an instance of the SqlXml data type. There really are no tricks; XML columns in SQL Server accept parameter values in exactly the same way as other data types.

Example
The following console application creates a new table in the AdventureWorks database. The new table includes a column named SalesID and an XML column named SalesInfo.

Note The AdventureWorks sample database is not installed by default when you install SQL Server. You can install it by running SQL Server Setup.

The example prepares a SqlCommand object to insert a row in the new table. A saved file provides the XML data needed for the SalesInfo column.
To create the file needed for the example to run, create a new text file in the same folder as your project. Name the file MyTestStoreData.xml. Open the file in Notepad and copy and paste the following text:

XML
<StoreSurvey xmlns="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/StoreSurvey">
  <AnnualSales>300000</AnnualSales>
  <AnnualRevenue>30000</AnnualRevenue>
  <BankName>International Bank</BankName>
  <BusinessType>BM</BusinessType>
  <YearOpened>1970</YearOpened>
  <Specialty>Road</Specialty>
  <SquareFeet>7000</SquareFeet>
  <Brands>3</Brands>
  <Internet>T1</Internet>
  <NumberEmployees>2</NumberEmployees>
</StoreSurvey>

C#
using System;
using System.Data;
using System.Data.SqlClient;
using System.Xml;
using System.Data.SqlTypes;

class Class1
{
    static void Main()
    {
        using (SqlConnection connection =
            new SqlConnection(GetConnectionString()))
        {
            connection.Open();

            // Create a sample table (dropping first if it already exists.)
            string commandNewTable =
                "IF EXISTS (SELECT * FROM dbo.sysobjects " +
                "WHERE id = object_id(N'[dbo].[XmlDataTypeSample]') " +
                "AND OBJECTPROPERTY(id, N'IsUserTable') = 1) " +
                "DROP TABLE [dbo].[XmlDataTypeSample];" +
                "CREATE TABLE [dbo].[XmlDataTypeSample](" +
                "[SalesID] [int] IDENTITY(1,1) NOT NULL, " +
                "[SalesInfo] [xml])";
            SqlCommand commandAdd = new SqlCommand(commandNewTable, connection);
            commandAdd.ExecuteNonQuery();

            string commandText =
                "INSERT INTO [dbo].[XmlDataTypeSample] " +
                "([SalesInfo] ) " +
                "VALUES(@xmlParameter )";
            SqlCommand command = new SqlCommand(commandText, connection);

            // Read the saved XML document as a
            // SqlXml-data typed variable.
            SqlXml newXml =
                new SqlXml(new XmlTextReader("MyTestStoreData.xml"));

            // Supply the SqlXml value for the value of the parameter.
            command.Parameters.AddWithValue("@xmlParameter", newXml);

            int result = command.ExecuteNonQuery();
            Console.WriteLine(result + " row was added.");
            Console.WriteLine("Press Enter to continue.");
            Console.ReadLine();
        }
    }

    private static string GetConnectionString()
    {
        // To avoid storing the connection string in your code,
        // you can retrieve it from a configuration file.
        return "Data Source=(local);Integrated Security=true;" +
            "Initial Catalog=AdventureWorks; ";
    }
}

See also

SQL Server Binary and Large-Value Data
SQL Server provides the max specifier, which expands the storage capacity of the varchar, nvarchar, and varbinary data types. varchar(max), nvarchar(max), and varbinary(max) are collectively called large-value data types. You can use the large-value data types to store up to 2^31-1 bytes of data. SQL Server 2008 introduces the FILESTREAM attribute, which is not a data type, but rather an attribute that can be defined on a column, allowing large-value data to be stored on the file system instead of in the database.

In This Section
Modifying Large-Value (max) Data in ADO.NET
FILESTREAM Data See also
Modifying Large-Value (max) Data in ADO.NET
Large object (LOB) data types are those that exceed the maximum row size of 8 kilobytes (KB). SQL Server provides a max specifier for the varchar, nvarchar, and varbinary data types to allow storage of values as large as 2^31-1 bytes. Table columns and Transact-SQL variables may specify varchar(max), nvarchar(max), or varbinary(max) data types. In ADO.NET, the max data types can be fetched by a DataReader, and can also be specified as both input and output parameter values without any special handling. For large varchar data types, data can be retrieved and updated incrementally. The max data types can be used for comparisons, as Transact-SQL variables, and for concatenation. They can also be used in the DISTINCT, ORDER BY, and GROUP BY clauses of a SELECT statement, as well as in aggregates, joins, and subqueries. The following table provides links to the documentation in SQL Server Books Online.

SQL Server Books Online

Large-Value Type Restrictions
The following restrictions apply to the max data types, which do not exist for smaller data types:
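When binding a max data type as a parameter, ADO.NET uses a Size of -1 to indicate "max". The following sketch (illustrative, not from the original text) declares an nvarchar(max) parameter and assigns it a value longer than the 4,000-character limit of plain nvarchar:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class MaxParameterDemo
{
    static void Main()
    {
        // A Size of -1 designates the max length for large-value types.
        SqlParameter summary =
            new SqlParameter("@DocumentSummary", SqlDbType.NVarChar, -1);

        // Longer than plain nvarchar(4000) allows.
        summary.Value = new string('x', 10000);

        Console.WriteLine("{0}, Size={1}", summary.SqlDbType, summary.Size);
    }
}
```

The same -1 convention appears later in this topic, where the GetDocumentSummary example declares its nvarchar(max) output parameter.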
Working with Large-Value Types in Transact-SQLThe Transact-SQL OPENROWSET function is a one-time method of connecting and accessing remote data. It includes all of the connection information necessary to access remote data from an OLE DB data source. OPENROWSET can be referenced in the FROM clause of a query as though it were a table name. It can also be referenced as the target table of an INSERT, UPDATE, or DELETE statement, subject to the capabilities of the OLE DB provider. The OPENROWSET function includes the BULK rowset provider, which allows you to read data directly from a file without loading the data into a target table. This enables you to use OPENROWSET in a simple INSERT SELECT statement. The OPENROWSET BULK option arguments provide significant control over where to begin and end reading data, how to deal with errors, and how data is interpreted. For example, you can specify that the data file be read as a single-row, single-column rowset of type varbinary, varchar, or nvarchar. For the complete syntax and options, see SQL Server Books Online. The following example inserts a photo into the ProductPhoto table in the AdventureWorks sample database. When using the BULK OPENROWSET provider, you must supply the named list of columns even if you aren't inserting values into every column. The primary key in this case is defined as an identity column, and may be omitted from the column list. Note that you must also supply a correlation name at the end of the OPENROWSET statement, which in this case is ThumbnailPhoto. This correlates with the column in the ProductPhoto table into which the file is being loaded. 
INSERT Production.ProductPhoto (
    ThumbnailPhoto,
    ThumbnailPhotoFilePath,
    LargePhoto,
    LargePhotoFilePath)
SELECT ThumbnailPhoto.*, null, null, N'tricycle_pink.gif'
FROM OPENROWSET
    (BULK 'c:\images\tricycle.jpg', SINGLE_BLOB) ThumbnailPhoto

Updating Data Using UPDATE .WRITE
The Transact-SQL UPDATE statement has new WRITE syntax for modifying the contents of varchar(max), nvarchar(max), or varbinary(max) columns. This allows you to perform partial updates of the data. The UPDATE .WRITE syntax is shown here in abbreviated form:

UPDATE { <object> }
SET { column_name = { .WRITE ( expression , @Offset , @Length ) } }

The WRITE method specifies that a section of the value of column_name will be modified. The expression is the value that will be copied to column_name, @Offset is the beginning point at which the expression will be written, and @Length is the length of the section in the column.
Note Neither @Offset nor @Length can be a negative number.

Example
This Transact-SQL example updates a partial value in DocumentSummary, an nvarchar(max) column in the Document table in the AdventureWorks database. The word 'components' is replaced by the word 'features' by specifying the replacement word, the beginning location (offset) of the word to be replaced in the existing data, and the number of characters to be replaced (length). The example includes SELECT statements before and after the UPDATE statement to compare results.

USE AdventureWorks;
GO
--View the existing value.
SELECT DocumentSummary FROM Production.Document
WHERE DocumentID = 3;
GO
-- The first sentence of the results will be:
-- Reflectors are vital safety components of your bicycle.

--Modify a single word in the DocumentSummary column.
UPDATE Production.Document
SET DocumentSummary .WRITE (N'features', 28, 10)
WHERE DocumentID = 3;
GO
--View the modified value.
SELECT DocumentSummary FROM Production.Document
WHERE DocumentID = 3;
GO
-- The first sentence of the results will be:
-- Reflectors are vital safety features of your bicycle.

Working with Large-Value Types in ADO.NET
You can work with large value types in ADO.NET by specifying them as SqlParameter objects, by using a SqlDataReader to return a result set, or by using a SqlDataAdapter to fill a DataSet/DataTable. There is no difference between the way you work with a large value type and its related, smaller value data type.

Using GetSqlBytes to Retrieve Data
The GetSqlBytes method of the SqlDataReader can be used to retrieve the contents of a varbinary(max) column. The following code fragment assumes a SqlCommand object named cmd that selects varbinary(max) data from a table and a SqlDataReader object named reader that retrieves the data as SqlBytes.
C#
reader = cmd.ExecuteReader(CommandBehavior.CloseConnection);
while (reader.Read())
{
    SqlBytes bytes = reader.GetSqlBytes(0);
}

Using GetSqlChars to Retrieve Data
The GetSqlChars method of the SqlDataReader can be used to retrieve the contents of a varchar(max) or nvarchar(max) column. The following code fragment assumes a SqlCommand object named cmd that selects nvarchar(max) data from a table and a SqlDataReader object named reader that retrieves the data.

C#
reader = cmd.ExecuteReader(CommandBehavior.CloseConnection);
while (reader.Read())
{
    SqlChars buffer = reader.GetSqlChars(0);
}

Using GetSqlBinary to Retrieve Data
The GetSqlBinary method of a SqlDataReader can be used to retrieve the contents of a varbinary(max) column. The following code fragment assumes a SqlCommand object named cmd that selects varbinary(max) data from a table and a SqlDataReader object named reader that retrieves the data as a SqlBinary stream.

C#
reader = cmd.ExecuteReader(CommandBehavior.CloseConnection);
while (reader.Read())
{
    SqlBinary binaryStream = reader.GetSqlBinary(0);
}

Using GetBytes to Retrieve Data
The GetBytes method of a SqlDataReader reads a stream of bytes from the specified column offset into a byte array starting at the specified array offset. The following code fragment assumes a SqlDataReader object named reader that retrieves bytes into a byte array. Note that, unlike GetSqlBytes, GetBytes requires a size for the array buffer.

C#
while (reader.Read())
{
    byte[] buffer = new byte[4000];
    long byteCount = reader.GetBytes(1, 0, buffer, 0, 4000);
}

Using GetValue to Retrieve Data
The GetValue method of a SqlDataReader reads the value from the specified column offset into an array. The following code fragment assumes a SqlDataReader object named reader that retrieves binary data from the first column offset, and then string data from the second column offset.
C#
while (reader.Read())
{
    // Read the data from the varbinary(max) column.
    byte[] binaryData = (byte[])reader.GetValue(0);

    // Read the data from the varchar(max) or nvarchar(max) column.
    String stringData = (String)reader.GetValue(1);
}

Converting from Large Value Types to CLR Types
You can convert the contents of a varchar(max) or nvarchar(max) column using any of the string conversion methods, such as ToString. The following code fragment assumes a SqlDataReader object named reader that retrieves the data.

C#
while (reader.Read())
{
    string str = reader[0].ToString();
    Console.WriteLine(str);
}

Example
The following code retrieves the name and the LargePhoto object from the ProductPhoto table in the AdventureWorks database and saves it to a file. The assembly needs to be compiled with a reference to the System.Drawing namespace. The GetSqlBytes method of the SqlDataReader returns a SqlBytes object that exposes a Stream property. The code uses this to create a new Bitmap object, and then saves it in the Gif ImageFormat.

C#
static private void TestGetSqlBytes(int documentID, string filePath)
{
    // Assumes GetConnectionString returns a valid connection string.
    using (SqlConnection connection =
        new SqlConnection(GetConnectionString()))
    {
        SqlCommand command = connection.CreateCommand();
        SqlDataReader reader = null;
        try
        {
            // Set up the command.
            command.CommandText =
                "SELECT LargePhotoFileName, LargePhoto " +
                "FROM Production.ProductPhoto " +
                "WHERE ProductPhotoID=@ProductPhotoID";
            command.CommandType = CommandType.Text;

            // Declare the parameter.
            SqlParameter paramID =
                new SqlParameter("@ProductPhotoID", SqlDbType.Int);
            paramID.Value = documentID;
            command.Parameters.Add(paramID);

            connection.Open();
            string photoName = null;
            reader = command.ExecuteReader(CommandBehavior.CloseConnection);
            if (reader.HasRows)
            {
                while (reader.Read())
                {
                    // Get the name of the file.
                    photoName = reader.GetString(0);
                    // Ensure that the column isn't null.
                    if (reader.IsDBNull(1))
                    {
                        Console.WriteLine("{0} is unavailable.", photoName);
                    }
                    else
                    {
                        SqlBytes bytes = reader.GetSqlBytes(1);
                        using (Bitmap productImage = new Bitmap(bytes.Stream))
                        {
                            String fileName = filePath + photoName;

                            // Save in gif format.
                            productImage.Save(fileName, ImageFormat.Gif);
                            Console.WriteLine("Successfully created {0}.", fileName);
                        }
                    }
                }
            }
            else
            {
                Console.WriteLine("No records returned.");
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
        finally
        {
            if (reader != null)
                reader.Dispose();
        }
    }
}

Using Large Value Type Parameters
Large value types can be used in SqlParameter objects the same way you use smaller value types in SqlParameter objects. You can retrieve large value types as SqlParameter values, as shown in the following example. The code assumes that the following GetDocumentSummary stored procedure exists in the AdventureWorks sample database. The stored procedure takes an input parameter named @DocumentID and returns the contents of the DocumentSummary column in the @DocumentSummary output parameter.

CREATE PROCEDURE GetDocumentSummary
(
    @DocumentID int,
    @DocumentSummary nvarchar(MAX) OUTPUT
)
AS
SET NOCOUNT ON
SELECT @DocumentSummary = Convert(nvarchar(MAX), DocumentSummary)
FROM Production.Document
WHERE DocumentID = @DocumentID

Example
The ADO.NET code creates SqlConnection and SqlCommand objects to execute the GetDocumentSummary stored procedure and retrieve the document summary, which is stored as a large value type. The code passes a value for the @DocumentID input parameter, and displays the results passed back in the @DocumentSummary output parameter in the Console window.

C#
static private string GetDocumentSummary(int documentID)
{
    // Assumes GetConnectionString returns a valid connection string.
    using (SqlConnection connection =
        new SqlConnection(GetConnectionString()))
    {
        connection.Open();
        SqlCommand command = connection.CreateCommand();
        try
        {
            // Set up the command to execute the stored procedure.
            command.CommandText = "GetDocumentSummary";
            command.CommandType = CommandType.StoredProcedure;

            // Set up the input parameter for the DocumentID.
            SqlParameter paramID =
                new SqlParameter("@DocumentID", SqlDbType.Int);
            paramID.Value = documentID;
            command.Parameters.Add(paramID);

            // Set up the output parameter to retrieve the summary.
            SqlParameter paramSummary =
                new SqlParameter("@DocumentSummary",
                SqlDbType.NVarChar, -1);
            paramSummary.Direction = ParameterDirection.Output;
            command.Parameters.Add(paramSummary);

            // Execute the stored procedure.
            command.ExecuteNonQuery();
            Console.WriteLine((String)(paramSummary.Value));
            return (String)(paramSummary.Value);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
            return null;
        }
    }
}

See also
FILESTREAM Data
The FILESTREAM storage attribute is for binary (BLOB) data stored in a varbinary(max) column. Before FILESTREAM, storing binary data required special handling. Unstructured data, such as text documents, images, and video, is often stored outside of the database, making it difficult to manage.

Note You must install the .NET Framework 3.5 SP1 (or later) to work with FILESTREAM data using SqlClient.

Specifying the FILESTREAM attribute on a varbinary(max) column causes SQL Server to store the data on the local NTFS file system instead of in the database file. Although it is stored separately, you can use the same Transact-SQL statements that are supported for working with varbinary(max) data that is stored in the database.

SqlClient Support for FILESTREAM
The .NET Framework Data Provider for SQL Server, System.Data.SqlClient, supports reading and writing FILESTREAM data using the SqlFileStream class defined in the System.Data.SqlTypes namespace. SqlFileStream inherits from the Stream class, which provides methods for reading and writing streams of data. Reading from a stream transfers data from the stream into a data structure, such as an array of bytes. Writing transfers data from the data structure into the stream.

Creating the SQL Server Table
The following Transact-SQL statements create a table named employees and insert a row of data. Once you have enabled FILESTREAM storage, you can use this table in conjunction with the code examples that follow. The links to resources in SQL Server Books Online are located at the end of this topic.

SQL
CREATE TABLE employees
(
    EmployeeId INT NOT NULL PRIMARY KEY,
    Photo VARBINARY(MAX) FILESTREAM NULL,
    RowGuid UNIQUEIDENTIFIER NOT NULL ROWGUIDCOL
        UNIQUE DEFAULT NEWID()
)
GO
INSERT INTO employees
VALUES (1, 0x00, DEFAULT)
GO

Example: Reading, Overwriting, and Inserting FILESTREAM Data
The following sample demonstrates how to read data from a FILESTREAM.
The code gets the logical path to the file, setting the FileAccess to Read and the FileOptions to SequentialScan. The code then reads the bytes from the SqlFileStream into the buffer. The bytes are then written to the console window.

The sample also demonstrates how to write data to a FILESTREAM in which all existing data is overwritten. The code gets the logical path to the file and creates the SqlFileStream, setting the FileAccess to Write and the FileOptions to SequentialScan. A single byte is written to the SqlFileStream, replacing any data in the file.

The sample also demonstrates how to write data to a FILESTREAM by using the Seek method to append data to the end of the file. The code gets the logical path to the file and creates the SqlFileStream, setting the FileAccess to ReadWrite and the FileOptions to SequentialScan. The code uses the Seek method to seek to the end of the file, appending a single byte to the existing file.

C#
using System;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.Data;
using System.IO;

namespace FileStreamTest
{
    class Program
    {
        static void Main(string[] args)
        {
            SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(
                "server=(local);integrated security=true;database=myDB");

            ReadFileStream(builder);
            OverwriteFileStream(builder);
            InsertFileStream(builder);

            Console.WriteLine("Done");
        }

        private static void ReadFileStream(SqlConnectionStringBuilder connStringBuilder)
        {
            using (SqlConnection connection = new SqlConnection(connStringBuilder.ToString()))
            {
                connection.Open();
                SqlCommand command = new SqlCommand(
                    "SELECT TOP(1) Photo.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() FROM employees",
                    connection);

                SqlTransaction tran = connection.BeginTransaction(IsolationLevel.ReadCommitted);
                command.Transaction = tran;

                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Get the pointer for the file.
                        string path = reader.GetString(0);
                        byte[] transactionContext = reader.GetSqlBytes(1).Buffer;

                        // Create the SqlFileStream.
                        using (Stream fileStream = new SqlFileStream(path,
                            transactionContext,
                            FileAccess.Read,
                            FileOptions.SequentialScan,
                            allocationSize: 0))
                        {
                            // Read the contents as bytes and write them to the console.
                            for (long index = 0; index < fileStream.Length; index++)
                            {
                                Console.WriteLine(fileStream.ReadByte());
                            }
                        }
                    }
                }
                tran.Commit();
            }
        }

        private static void OverwriteFileStream(SqlConnectionStringBuilder connStringBuilder)
        {
            using (SqlConnection connection = new SqlConnection(connStringBuilder.ToString()))
            {
                connection.Open();
                SqlCommand command = new SqlCommand(
                    "SELECT TOP(1) Photo.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() FROM employees",
                    connection);

                SqlTransaction tran = connection.BeginTransaction(IsolationLevel.ReadCommitted);
                command.Transaction = tran;

                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Get the pointer for the file.
                        string path = reader.GetString(0);
                        byte[] transactionContext = reader.GetSqlBytes(1).Buffer;

                        // Create the SqlFileStream.
                        using (Stream fileStream = new SqlFileStream(path,
                            transactionContext,
                            FileAccess.Write,
                            FileOptions.SequentialScan,
                            allocationSize: 0))
                        {
                            // Write a single byte to the file. This will
                            // replace any data in the file.
                            fileStream.WriteByte(0x01);
                        }
                    }
                }
                tran.Commit();
            }
        }

        private static void InsertFileStream(SqlConnectionStringBuilder connStringBuilder)
        {
            using (SqlConnection connection = new SqlConnection(connStringBuilder.ToString()))
            {
                connection.Open();
                SqlCommand command = new SqlCommand(
                    "SELECT TOP(1) Photo.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() FROM employees",
                    connection);

                SqlTransaction tran = connection.BeginTransaction(IsolationLevel.ReadCommitted);
                command.Transaction = tran;

                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Get the pointer for the file.
                        string path = reader.GetString(0);
                        byte[] transactionContext = reader.GetSqlBytes(1).Buffer;

                        using (Stream fileStream = new SqlFileStream(path,
                            transactionContext,
                            FileAccess.ReadWrite,
                            FileOptions.SequentialScan,
                            allocationSize: 0))
                        {
                            // Seek to the end of the file.
                            fileStream.Seek(0, SeekOrigin.End);

                            // Append a single byte.
                            fileStream.WriteByte(0x01);
                        }
                    }
                }
                tran.Commit();
            }
        }
    }
}

For another sample, see How to store and fetch binary data into a file stream column.

Resources in SQL Server Books Online
The complete documentation for FILESTREAM is located in the following sections in SQL Server Books Online.
See also
Inserting an Image from a File
You can write a binary large object (BLOB) to a database as either binary or character data, depending on the type of field at your data source. BLOB is a generic term that refers to the text, ntext, and image data types, which typically contain documents and pictures. To write a BLOB value to your database, issue the appropriate INSERT or UPDATE statement and pass the BLOB value as an input parameter (see Configuring Parameters and Parameter Data Types). If your BLOB is stored as text, such as a SQL Server text field, you can pass the BLOB as a string parameter. If the BLOB is stored in binary format, such as a SQL Server image field, you can pass an array of type byte as a binary parameter.
Example
The following code example adds employee information to the Employees table in the Northwind database. A photo of the employee is read from a file and added to the Photo field in the table, which is an image field.
C#
public static void AddEmployee(
    string lastName, string firstName, string title,
    DateTime hireDate, int reportsTo,
    string photoFilePath, string connectionString)
{
    byte[] photo = GetPhoto(photoFilePath);

    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        SqlCommand command = new SqlCommand(
            "INSERT INTO Employees (LastName, FirstName, " +
            "Title, HireDate, ReportsTo, Photo) " +
            "VALUES (@LastName, @FirstName, @Title, " +
            "@HireDate, @ReportsTo, @Photo)", connection);

        command.Parameters.Add("@LastName", SqlDbType.NVarChar, 20).Value = lastName;
        command.Parameters.Add("@FirstName", SqlDbType.NVarChar, 10).Value = firstName;
        command.Parameters.Add("@Title", SqlDbType.NVarChar, 30).Value = title;
        command.Parameters.Add("@HireDate", SqlDbType.DateTime).Value = hireDate;
        command.Parameters.Add("@ReportsTo", SqlDbType.Int).Value = reportsTo;
        command.Parameters.Add("@Photo", SqlDbType.Image, photo.Length).Value = photo;

        connection.Open();
        command.ExecuteNonQuery();
    }
}

public static byte[] GetPhoto(string filePath)
{
    // Read the entire file into a byte array.
    using (FileStream stream = new FileStream(filePath, FileMode.Open, FileAccess.Read))
    using (BinaryReader reader = new BinaryReader(stream))
    {
        return reader.ReadBytes((int)stream.Length);
    }
}
See also
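As a companion to the insert example, the following sketch (not part of the original article) shows one way to stream the image back out of the database to a file. It assumes the same Northwind Employees table and Photo column used above, plus an EmployeeID key column; the 8040-byte buffer size is an arbitrary choice. CommandBehavior.SequentialAccess together with SqlDataReader.GetBytes reads the BLOB in chunks instead of buffering the entire value in memory.

```csharp
// Requires: using System.Data; using System.Data.SqlClient; using System.IO;
public static void SavePhotoToFile(
    int employeeId, string outputFilePath, string connectionString)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        SqlCommand command = new SqlCommand(
            "SELECT Photo FROM Employees WHERE EmployeeID = @EmployeeID",
            connection);
        command.Parameters.Add("@EmployeeID", SqlDbType.Int).Value = employeeId;

        connection.Open();

        // SequentialAccess streams large values instead of loading whole rows.
        using (SqlDataReader reader =
            command.ExecuteReader(CommandBehavior.SequentialAccess))
        using (FileStream output = new FileStream(
            outputFilePath, FileMode.Create, FileAccess.Write))
        {
            if (reader.Read())
            {
                byte[] buffer = new byte[8040];  // arbitrary chunk size
                long offset = 0;
                long bytesRead;

                // GetBytes returns 0 when no more data remains.
                while ((bytesRead = reader.GetBytes(
                    0, offset, buffer, 0, buffer.Length)) > 0)
                {
                    output.Write(buffer, 0, (int)bytesRead);
                    offset += bytesRead;
                }
            }
        }
    }
}
```

With SequentialAccess you must read columns in order and cannot revisit them, which is the trade-off for not materializing the full BLOB.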
SQL Server Data Operations in ADO.NET
This section describes SQL Server features and functionality that are specific to the .NET Framework Data Provider for SQL Server (System.Data.SqlClient). In This Section
Bulk Copy Operations in SQL Server
Multiple Active Result Sets (MARS)
Asynchronous Operations
Table-Valued Parameters See also
Bulk Copy Operations in SQL Server
Microsoft SQL Server includes a popular command-line utility named bcp for quickly bulk copying large files into tables or views in SQL Server databases. The SqlBulkCopy class allows you to write managed code solutions that provide similar functionality. There are other ways to load data into a SQL Server table (INSERT statements, for example), but SqlBulkCopy offers a significant performance advantage over them. The SqlBulkCopy class can be used to write data only to SQL Server tables. However, the data source is not limited to SQL Server; any data source can be used, as long as the data can be loaded into a DataTable instance or read with an IDataReader instance. Using the SqlBulkCopy class, you can perform:
A single bulk copy operation.
Multiple bulk copy operations.
A bulk copy operation within a transaction.
Note When using .NET Framework version 1.1 or earlier (which does not support the SqlBulkCopy class), you can execute the SQL Server Transact-SQL BULK INSERT statement using the SqlCommand object. In This Section
Bulk Copy Example Setup
Single Bulk Copy Operations
Multiple Bulk Copy Operations
Transaction and Bulk Copy Operations See also
Bulk Copy Example Setup
The SqlBulkCopy class can be used to write data only to SQL Server tables. The code samples shown in this topic use the SQL Server sample database, AdventureWorks. To avoid altering the existing tables, the code samples write data to tables that you must create first. The BulkCopyDemoMatchingColumns and BulkCopyDemoDifferentColumns tables are both based on the AdventureWorks Production.Products table. In code samples that use these tables, data is added from the Production.Products table to one of these sample tables. The BulkCopyDemoDifferentColumns table is used when the sample illustrates how to map columns from the source data to the destination table; BulkCopyDemoMatchingColumns is used for most other samples. A few of the code samples demonstrate how to use one SqlBulkCopy instance to write to multiple tables. For these samples, the BulkCopyDemoOrderHeader and BulkCopyDemoOrderDetail tables are used as the destination tables. These tables are based on the Sales.SalesOrderHeader and Sales.SalesOrderDetail tables in AdventureWorks.
Note The SqlBulkCopy code samples are provided to demonstrate the syntax for using SqlBulkCopy only. If the source and destination tables are located in the same SQL Server instance, it is easier and faster to use a Transact-SQL INSERT … SELECT statement to copy the data.
Table Setup
To create the tables necessary for the code samples to run correctly, you must run the following Transact-SQL statements in a SQL Server database.
USE AdventureWorks IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = object_id(N'[dbo].[BulkCopyDemoMatchingColumns]') AND OBJECTPROPERTY(id, N'IsUserTable') = 1) DROP TABLE [dbo].[BulkCopyDemoMatchingColumns] CREATE TABLE [dbo].[BulkCopyDemoMatchingColumns]([ProductID] [int] IDENTITY(1,1) NOT NULL, [Name] [nvarchar](50) NOT NULL, [ProductNumber] [nvarchar](25) NOT NULL, CONSTRAINT [PK_ProductID] PRIMARY KEY CLUSTERED ( [ProductID] ASC ) ON [PRIMARY]) ON [PRIMARY] IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = object_id(N'[dbo].[BulkCopyDemoDifferentColumns]') AND OBJECTPROPERTY(id, N'IsUserTable') = 1) DROP TABLE [dbo].[BulkCopyDemoDifferentColumns] CREATE TABLE [dbo].[BulkCopyDemoDifferentColumns]([ProdID] [int] IDENTITY(1,1) NOT NULL, [ProdNum] [nvarchar](25) NOT NULL, [ProdName] [nvarchar](50) NOT NULL, CONSTRAINT [PK_ProdID] PRIMARY KEY CLUSTERED ( [ProdID] ASC ) ON [PRIMARY]) ON [PRIMARY] IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = object_id(N'[dbo].[BulkCopyDemoOrderHeader]') AND OBJECTPROPERTY(id, N'IsUserTable') = 1) DROP TABLE [dbo].[BulkCopyDemoOrderHeader] CREATE TABLE [dbo].[BulkCopyDemoOrderHeader]([SalesOrderID] [int] IDENTITY(1,1) NOT NULL, [OrderDate] [datetime] NOT NULL, [AccountNumber] [nvarchar](15) NULL, CONSTRAINT [PK_SalesOrderID] PRIMARY KEY CLUSTERED ( [SalesOrderID] ASC ) ON [PRIMARY]) ON [PRIMARY] IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = object_id(N'[dbo].[BulkCopyDemoOrderDetail]') AND OBJECTPROPERTY(id, N'IsUserTable') = 1) DROP TABLE [dbo].[BulkCopyDemoOrderDetail] CREATE TABLE [dbo].[BulkCopyDemoOrderDetail]([SalesOrderID] [int] NOT NULL, [SalesOrderDetailID] [int] NOT NULL, [OrderQty] [smallint] NOT NULL, [ProductID] [int] NOT NULL, [UnitPrice] [money] NOT NULL, CONSTRAINT [PK_LineNumber] PRIMARY KEY CLUSTERED ( [SalesOrderID] ASC, [SalesOrderDetailID] ASC ) ON [PRIMARY]) ON [PRIMARY] See alsoSingle Bulk Copy OperationsThe simplest approach to performing a SQL Server bulk copy operation is to perform a single 
operation against a database. By default, a bulk copy operation is performed as an isolated operation: the copy occurs in a non-transacted way, with no opportunity for rolling it back.
Note If you need to roll back all or part of the bulk copy when an error occurs, you can either use a SqlBulkCopy-managed transaction, or perform the bulk copy operation within an existing transaction. SqlBulkCopy will also work with System.Transactions if the connection is enlisted (implicitly or explicitly) into a System.Transactions transaction. For more information, see Transaction and Bulk Copy Operations.
The general steps for performing a bulk copy operation are as follows:
1. Connect to the source server and get the data to be copied. (The data can also come from a file, a DataTable, a DataRow array, or an IDataReader.)
2. Connect to the destination server (unless you want SqlBulkCopy to establish a connection for you).
3. Create a SqlBulkCopy object, setting any necessary properties.
4. Set the DestinationTableName property to identify the target table.
5. Call one of the WriteToServer methods.
6. Optionally, update properties and call WriteToServer again as necessary.
7. Call Close, or wrap the bulk copy operations within a using statement.
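As a compact illustration (a sketch only — it assumes the AdventureWorks database on a local default instance and the dbo.BulkCopyDemoMatchingColumns work table described in Bulk Copy Example Setup), the whole flow can be reduced to a few lines:

```csharp
using System.Data.SqlClient;

class MinimalBulkCopy
{
    static void Main()
    {
        string connectionString =
            "Data Source=(local);Integrated Security=true;" +
            "Initial Catalog=AdventureWorks;";

        using (SqlConnection source = new SqlConnection(connectionString))
        {
            source.Open();

            // Get the source data as an IDataReader.
            SqlCommand command = new SqlCommand(
                "SELECT ProductID, Name, ProductNumber FROM Production.Product;",
                source);

            using (SqlDataReader reader = command.ExecuteReader())
            // The using block disposes the SqlBulkCopy object (the Close step).
            using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connectionString))
            {
                // Identify the target table.
                bulkCopy.DestinationTableName = "dbo.BulkCopyDemoMatchingColumns";

                // Copy all rows from the reader to the destination.
                bulkCopy.WriteToServer(reader);
            }
        }
    }
}
```

Because SqlBulkCopy is constructed here from a connection string, it opens its own connection to the destination; the fuller example that follows shows the same flow with explicit destination connections and error handling.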
Caution We recommend that the source and target column data types match. If the data types do not match, SqlBulkCopy attempts to convert each source value to the target data type, using the rules employed by Value. Conversions can affect performance, and also can result in unexpected errors. For example, a Double data type can be converted to a Decimal data type most of the time, but not always. ExampleThe following console application demonstrates how to load data using the SqlBulkCopy class. In this example, a SqlDataReader is used to copy data from the Production.Product table in the SQL Server AdventureWorks database to a similar table in the same database. Important This sample will not run unless you have created the work tables as described in Bulk Copy Example Setup. This code is provided to demonstrate the syntax for using SqlBulkCopy only. If the source and destination tables are located in the same SQL Server instance, it is easier and faster to use a Transact-SQL INSERT … SELECT statement to copy the data. C#using System.Data.SqlClient; class Program { static void Main() { string connectionString = GetConnectionString(); // Open a sourceConnection to the AdventureWorks database. using (SqlConnection sourceConnection = new SqlConnection(connectionString)) { sourceConnection.Open(); // Perform an initial count on the destination table. SqlCommand commandRowCount = new SqlCommand( "SELECT COUNT(*) FROM " + "dbo.BulkCopyDemoMatchingColumns;", sourceConnection); long countStart = System.Convert.ToInt32( commandRowCount.ExecuteScalar()); Console.WriteLine("Starting row count = {0}", countStart); // Get data from the source table as a SqlDataReader. SqlCommand commandSourceData = new SqlCommand( "SELECT ProductID, Name, " + "ProductNumber " + "FROM Production.Product;", sourceConnection); SqlDataReader reader = commandSourceData.ExecuteReader(); // Open the destination connection. 
In the real world you would // not use SqlBulkCopy to move data from one table to the other // in the same database. This is for demonstration purposes only. using (SqlConnection destinationConnection = new SqlConnection(connectionString)) { destinationConnection.Open(); // Set up the bulk copy object. // Note that the column positions in the source // data reader match the column positions in // the destination table so there is no need to // map columns. using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destinationConnection)) { bulkCopy.DestinationTableName = "dbo.BulkCopyDemoMatchingColumns"; try { // Write from the source to the destination. bulkCopy.WriteToServer(reader); } catch (Exception ex) { Console.WriteLine(ex.Message); } finally { // Close the SqlDataReader. The SqlBulkCopy // object is automatically closed at the end // of the using block. reader.Close(); } } // Perform a final count on the destination // table to see how many rows were added. long countEnd = System.Convert.ToInt32( commandRowCount.ExecuteScalar()); Console.WriteLine("Ending row count = {0}", countEnd); Console.WriteLine("{0} rows were added.", countEnd - countStart); Console.WriteLine("Press Enter to finish."); Console.ReadLine(); } } } private static string GetConnectionString() // To avoid storing the sourceConnection string in your code, // you can retrieve it from a configuration file. { return "Data Source=(local); " + " Integrated Security=true;" + "Initial Catalog=AdventureWorks;"; } } Performing a Bulk Copy Operation Using Transact-SQL and the Command ClassThe following example illustrates how to use the ExecuteNonQuery method to execute the BULK INSERT statement. Note The file path for the data source is relative to the server. The server process must have access to that path in order for the bulk copy operation to succeed. 
C#
using (SqlConnection connection = new SqlConnection(connectionString))
{
    string queryString =
        "BULK INSERT Northwind.dbo.[Order Details] " +
        @"FROM 'f:\mydata\data.tbl' " +
        @"WITH ( FORMATFILE='f:\mydata\data.fmt' )";
    connection.Open();
    SqlCommand command = new SqlCommand(queryString, connection);
    command.ExecuteNonQuery();
}
See also
Multiple Bulk Copy Operations
You can perform multiple bulk copy operations using a single instance of the SqlBulkCopy class. If the operation parameters change between copies (for example, the name of the destination table), you must update them prior to any subsequent calls to any of the WriteToServer methods, as demonstrated in the following example. Unless explicitly changed, all property values remain the same as they were on the previous bulk copy operation for a given instance.
Note Performing multiple bulk copy operations using the same instance of SqlBulkCopy is usually more efficient than using a separate instance for each operation. If you perform several bulk copy operations using the same SqlBulkCopy object, there are no restrictions on whether source or target information is equal or different in each operation. However, you must ensure that column association information is properly set each time you write to the server.
Important This sample will not run unless you have created the work tables as described in Bulk Copy Example Setup. This code is provided to demonstrate the syntax for using SqlBulkCopy only. If the source and destination tables are located in the same SQL Server instance, it is easier and faster to use a Transact-SQL INSERT … SELECT statement to copy the data. C#using System.Data.SqlClient; class Program { static void Main() { string connectionString = GetConnectionString(); // Open a connection to the AdventureWorks database. using (SqlConnection connection = new SqlConnection(connectionString)) { connection.Open(); // Empty the destination tables.
SqlCommand deleteHeader = new SqlCommand( "DELETE FROM dbo.BulkCopyDemoOrderHeader;", connection); deleteHeader.ExecuteNonQuery(); SqlCommand deleteDetail = new SqlCommand( "DELETE FROM dbo.BulkCopyDemoOrderDetail;", connection); deleteDetail.ExecuteNonQuery(); // Perform an initial count on the destination // table with matching columns. SqlCommand countRowHeader = new SqlCommand( "SELECT COUNT(*) FROM dbo.BulkCopyDemoOrderHeader;", connection); long countStartHeader = System.Convert.ToInt32( countRowHeader.ExecuteScalar()); Console.WriteLine( "Starting row count for Header table = {0}", countStartHeader); // Perform an initial count on the destination // table with different column positions. SqlCommand countRowDetail = new SqlCommand( "SELECT COUNT(*) FROM dbo.BulkCopyDemoOrderDetail;", connection); long countStartDetail = System.Convert.ToInt32( countRowDetail.ExecuteScalar()); Console.WriteLine( "Starting row count for Detail table = {0}", countStartDetail); // Get data from the source table as a SqlDataReader. // The Sales.SalesOrderHeader and Sales.SalesOrderDetail // tables are quite large and could easily cause a timeout // if all data from the tables is added to the destination. // To keep the example simple and quick, a parameter is // used to select only orders for a particular account // as the source for the bulk insert. SqlCommand headerData = new SqlCommand( "SELECT [SalesOrderID], [OrderDate], " + "[AccountNumber] FROM [Sales].[SalesOrderHeader] " + "WHERE [AccountNumber] = @accountNumber;", connection); SqlParameter parameterAccount = new SqlParameter(); parameterAccount.ParameterName = "@accountNumber"; parameterAccount.SqlDbType = SqlDbType.NVarChar; parameterAccount.Direction = ParameterDirection.Input; parameterAccount.Value = "10-4020-000034"; headerData.Parameters.Add(parameterAccount); SqlDataReader readerHeader = headerData.ExecuteReader(); // Get the Detail data in a separate connection. 
using (SqlConnection connection2 = new SqlConnection(connectionString)) { connection2.Open(); SqlCommand sourceDetailData = new SqlCommand( "SELECT [Sales].[SalesOrderDetail].[SalesOrderID], [SalesOrderDetailID], " + "[OrderQty], [ProductID], [UnitPrice] FROM [Sales].[SalesOrderDetail] " + "INNER JOIN [Sales].[SalesOrderHeader] ON [Sales].[SalesOrderDetail]." + "[SalesOrderID] = [Sales].[SalesOrderHeader].[SalesOrderID] " + "WHERE [AccountNumber] = @accountNumber;", connection2); SqlParameter accountDetail = new SqlParameter(); accountDetail.ParameterName = "@accountNumber"; accountDetail.SqlDbType = SqlDbType.NVarChar; accountDetail.Direction = ParameterDirection.Input; accountDetail.Value = "10-4020-000034"; sourceDetailData.Parameters.Add(accountDetail); SqlDataReader readerDetail = sourceDetailData.ExecuteReader(); // Create the SqlBulkCopy object. using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connectionString)) { bulkCopy.DestinationTableName = "dbo.BulkCopyDemoOrderHeader"; // Guarantee that columns are mapped correctly by // defining the column mappings for the order. bulkCopy.ColumnMappings.Add("SalesOrderID", "SalesOrderID"); bulkCopy.ColumnMappings.Add("OrderDate", "OrderDate"); bulkCopy.ColumnMappings.Add("AccountNumber", "AccountNumber"); // Write readerHeader to the destination. try { bulkCopy.WriteToServer(readerHeader); } catch (Exception ex) { Console.WriteLine(ex.Message); } finally { readerHeader.Close(); } // Set up the order details destination. bulkCopy.DestinationTableName ="dbo.BulkCopyDemoOrderDetail"; // Clear the ColumnMappingCollection. bulkCopy.ColumnMappings.Clear(); // Add order detail column mappings. bulkCopy.ColumnMappings.Add("SalesOrderID", "SalesOrderID"); bulkCopy.ColumnMappings.Add("SalesOrderDetailID", "SalesOrderDetailID"); bulkCopy.ColumnMappings.Add("OrderQty", "OrderQty"); bulkCopy.ColumnMappings.Add("ProductID", "ProductID"); bulkCopy.ColumnMappings.Add("UnitPrice", "UnitPrice"); // Write readerDetail to the destination. 
try { bulkCopy.WriteToServer(readerDetail); } catch (Exception ex) { Console.WriteLine(ex.Message); } finally { readerDetail.Close(); } } // Perform a final count on the destination // tables to see how many rows were added. long countEndHeader = System.Convert.ToInt32( countRowHeader.ExecuteScalar()); Console.WriteLine("{0} rows were added to the Header table.", countEndHeader - countStartHeader); long countEndDetail = System.Convert.ToInt32( countRowDetail.ExecuteScalar()); Console.WriteLine("{0} rows were added to the Detail table.", countEndDetail - countStartDetail); Console.WriteLine("Press Enter to finish."); Console.ReadLine(); } } } private static string GetConnectionString() // To avoid storing the connection string in your code, // you can retrieve it from a configuration file. { return "Data Source=(local); " + " Integrated Security=true;" + "Initial Catalog=AdventureWorks;"; } } See alsoTransaction and Bulk Copy OperationsBulk copy operations can be performed as isolated operations or as part of a multiple step transaction. This latter option enables you to perform more than one bulk copy operation within the same transaction, as well as perform other database operations (such as inserts, updates, and deletes) while still being able to commit or roll back the entire transaction. By default, a bulk copy operation is performed as an isolated operation. The bulk copy operation occurs in a non-transacted way, with no opportunity for rolling it back. If you need to roll back all or part of the bulk copy when an error occurs, you can use a SqlBulkCopy-managed transaction, perform the bulk copy operation within an existing transaction, or be enlisted in a System.TransactionsTransaction. Performing a Non-transacted Bulk Copy OperationThe following Console application shows what happens when a non-transacted bulk copy operation encounters an error partway through the operation. 
In the example, the source table and destination table each include an Identity column named ProductID. The code first prepares the destination table by deleting all rows and then inserting a single row whose ProductID is known to exist in the source table. By default, a new value for the Identity column is generated in the destination table for each row added. In this example, an option is set when the connection is opened that forces the bulk load process to use the Identity values from the source table instead. The bulk copy operation is executed with the BatchSize property set to 10. When the operation encounters the invalid row, an exception is thrown. In this first example, the bulk copy operation is non-transacted. All batches copied up to the point of the error are committed; the batch containing the duplicate key is rolled back, and the bulk copy operation is halted before processing any other batches. Note This sample will not run unless you have created the work tables as described in Bulk Copy Example Setup. This code is provided to demonstrate the syntax for using SqlBulkCopy only. If the source and destination tables are located in the same SQL Server instance, it is easier and faster to use a Transact-SQLINSERT … SELECT statement to copy the data. C#using System.Data.SqlClient; class Program { static void Main() { string connectionString = GetConnectionString(); // Open a sourceConnection to the AdventureWorks database. using (SqlConnection sourceConnection = new SqlConnection(connectionString)) { sourceConnection.Open(); // Delete all from the destination table. SqlCommand commandDelete = new SqlCommand(); commandDelete.Connection = sourceConnection; commandDelete.CommandText = "DELETE FROM dbo.BulkCopyDemoMatchingColumns"; commandDelete.ExecuteNonQuery(); // Add a single row that will result in duplicate key // when all rows from source are bulk copied. 
// Note that this technique will only be successful in // illustrating the point if a row with ProductID = 446 // exists in the AdventureWorks Production.Products table. // If you have made changes to the data in this table, change // the SQL statement in the code to add a ProductID that // does exist in your version of the Production.Products // table. Choose any ProductID in the middle of the table // (not first or last row) to best illustrate the result. SqlCommand commandInsert = new SqlCommand(); commandInsert.Connection = sourceConnection; commandInsert.CommandText = "SET IDENTITY_INSERT dbo.BulkCopyDemoMatchingColumns ON;" + "INSERT INTO " + "dbo.BulkCopyDemoMatchingColumns " + "([ProductID], [Name] ,[ProductNumber]) " + "VALUES(446, 'Lock Nut 23','LN-3416');" + "SET IDENTITY_INSERT dbo.BulkCopyDemoMatchingColumns OFF"; commandInsert.ExecuteNonQuery(); // Perform an initial count on the destination table. SqlCommand commandRowCount = new SqlCommand( "SELECT COUNT(*) FROM dbo.BulkCopyDemoMatchingColumns;", sourceConnection); long countStart = System.Convert.ToInt32( commandRowCount.ExecuteScalar()); Console.WriteLine("Starting row count = {0}", countStart); // Get data from the source table as a SqlDataReader. SqlCommand commandSourceData = new SqlCommand( "SELECT ProductID, Name, ProductNumber " + "FROM Production.Product;", sourceConnection); SqlDataReader reader = commandSourceData.ExecuteReader(); // Set up the bulk copy object using the KeepIdentity option. using (SqlBulkCopy bulkCopy = new SqlBulkCopy( connectionString, SqlBulkCopyOptions.KeepIdentity)) { bulkCopy.BatchSize = 10; bulkCopy.DestinationTableName = "dbo.BulkCopyDemoMatchingColumns"; // Write from the source to the destination. // This should fail with a duplicate key error // after some of the batches have been copied. 
try { bulkCopy.WriteToServer(reader); } catch (Exception ex) { Console.WriteLine(ex.Message); } finally { reader.Close(); } } // Perform a final count on the destination // table to see how many rows were added. long countEnd = System.Convert.ToInt32( commandRowCount.ExecuteScalar()); Console.WriteLine("Ending row count = {0}", countEnd); Console.WriteLine("{0} rows were added.", countEnd - countStart); Console.WriteLine("Press Enter to finish."); Console.ReadLine(); } } private static string GetConnectionString() // To avoid storing the sourceConnection string in your code, // you can retrieve it from a configuration file. { return "Data Source=(local); " + " Integrated Security=true;" + "Initial Catalog=AdventureWorks;"; } } Performing a Dedicated Bulk Copy Operation in a TransactionBy default, a bulk copy operation is its own transaction. When you want to perform a dedicated bulk copy operation, create a new instance of SqlBulkCopy with a connection string, or use an existing SqlConnection object without an active transaction. In each scenario, the bulk copy operation creates, and then commits or rolls back the transaction. You can explicitly specify the UseInternalTransaction option in the SqlBulkCopy class constructor to explicitly cause a bulk copy operation to execute in its own transaction, causing each batch of the bulk copy operation to execute within a separate transaction. Note Since different batches are executed in different transactions, if an error occurs during the bulk copy operation, all the rows in the current batch will be rolled back, but rows from previous batches will remain in the database. The following console application is similar to the previous example, with one exception: In this example, the bulk copy operation manages its own transactions. All batches copied up to the point of the error are committed; the batch containing the duplicate key is rolled back, and the bulk copy operation is halted before processing any other batches. 
Important This sample will not run unless you have created the work tables as described in Bulk Copy Example Setup. This code is provided to demonstrate the syntax for using SqlBulkCopy only. If the source and destination tables are located in the same SQL Server instance, it is easier and faster to use a Transact-SQLINSERT … SELECT statement to copy the data. C#using System.Data.SqlClient; class Program { static void Main() { string connectionString = GetConnectionString(); // Open a sourceConnection to the AdventureWorks database. using (SqlConnection sourceConnection = new SqlConnection(connectionString)) { sourceConnection.Open(); // Delete all from the destination table. SqlCommand commandDelete = new SqlCommand(); commandDelete.Connection = sourceConnection; commandDelete.CommandText = "DELETE FROM dbo.BulkCopyDemoMatchingColumns"; commandDelete.ExecuteNonQuery(); // Add a single row that will result in duplicate key // when all rows from source are bulk copied. // Note that this technique will only be successful in // illustrating the point if a row with ProductID = 446 // exists in the AdventureWorks Production.Products table. // If you have made changes to the data in this table, change // the SQL statement in the code to add a ProductID that // does exist in your version of the Production.Products // table. Choose any ProductID in the middle of the table // (not first or last row) to best illustrate the result. SqlCommand commandInsert = new SqlCommand(); commandInsert.Connection = sourceConnection; commandInsert.CommandText = "SET IDENTITY_INSERT dbo.BulkCopyDemoMatchingColumns ON;" + "INSERT INTO " + "dbo.BulkCopyDemoMatchingColumns " + "([ProductID], [Name] ,[ProductNumber]) " + "VALUES(446, 'Lock Nut 23','LN-3416');" + "SET IDENTITY_INSERT dbo.BulkCopyDemoMatchingColumns OFF"; commandInsert.ExecuteNonQuery(); // Perform an initial count on the destination table. 
SqlCommand commandRowCount = new SqlCommand( "SELECT COUNT(*) FROM dbo.BulkCopyDemoMatchingColumns;", sourceConnection); long countStart = System.Convert.ToInt32( commandRowCount.ExecuteScalar()); Console.WriteLine("Starting row count = {0}", countStart); // Get data from the source table as a SqlDataReader. SqlCommand commandSourceData = new SqlCommand( "SELECT ProductID, Name, ProductNumber " + "FROM Production.Product;", sourceConnection); SqlDataReader reader = commandSourceData.ExecuteReader(); // Set up the bulk copy object. // Note that when specifying the UseInternalTransaction // option, you cannot also specify an external transaction. // Therefore, you must use the SqlBulkCopy construct that // requires a string for the connection, rather than an // existing SqlConnection object. using (SqlBulkCopy bulkCopy = new SqlBulkCopy( connectionString, SqlBulkCopyOptions.KeepIdentity | SqlBulkCopyOptions.UseInternalTransaction)) { bulkCopy.BatchSize = 10; bulkCopy.DestinationTableName = "dbo.BulkCopyDemoMatchingColumns"; // Write from the source to the destination. // This should fail with a duplicate key error // after some of the batches have been copied. try { bulkCopy.WriteToServer(reader); } catch (Exception ex) { Console.WriteLine(ex.Message); } finally { reader.Close(); } } // Perform a final count on the destination // table to see how many rows were added. long countEnd = System.Convert.ToInt32( commandRowCount.ExecuteScalar()); Console.WriteLine("Ending row count = {0}", countEnd); Console.WriteLine("{0} rows were added.", countEnd - countStart); Console.WriteLine("Press Enter to finish."); Console.ReadLine(); } } private static string GetConnectionString() // To avoid storing the sourceConnection string in your code, // you can retrieve it from a configuration file. 
{ return "Data Source=(local); " + " Integrated Security=true;" + "Initial Catalog=AdventureWorks;"; } } Using Existing TransactionsYou can specify an existing SqlTransaction object as a parameter in a SqlBulkCopy constructor. In this situation, the bulk copy operation is performed in an existing transaction, and no change is made to the transaction state (that is, it is neither committed nor aborted). This allows an application to include the bulk copy operation in a transaction with other database operations. However, if you do not specify a SqlTransaction object and pass a null reference, and the connection has an active transaction, an exception is thrown. If you need to roll back the entire bulk copy operation because an error occurs, or if the bulk copy should execute as part of a larger process that can be rolled back, you can provide a SqlTransaction object to the SqlBulkCopy constructor. The following console application is similar to the first (non-transacted) example, with one exception: in this example, the bulk copy operation is included in a larger, external transaction. When the primary key violation error occurs, the entire transaction is rolled back and no rows are added to the destination table. Important This sample will not run unless you have created the work tables as described in Bulk Copy Example Setup. This code is provided to demonstrate the syntax for using SqlBulkCopy only. If the source and destination tables are located in the same SQL Server instance, it is easier and faster to use a Transact-SQLINSERT … SELECT statement to copy the data. C#using System.Data.SqlClient; class Program { static void Main() { string connectionString = GetConnectionString(); // Open a sourceConnection to the AdventureWorks database. using (SqlConnection sourceConnection = new SqlConnection(connectionString)) { sourceConnection.Open(); // Delete all from the destination table. 
SqlCommand commandDelete = new SqlCommand(); commandDelete.Connection = sourceConnection; commandDelete.CommandText = "DELETE FROM dbo.BulkCopyDemoMatchingColumns"; commandDelete.ExecuteNonQuery(); // Add a single row that will result in duplicate key // when all rows from source are bulk copied. // Note that this technique will only be successful in // illustrating the point if a row with ProductID = 446 // exists in the AdventureWorks Production.Products table. // If you have made changes to the data in this table, change // the SQL statement in the code to add a ProductID that // does exist in your version of the Production.Products // table. Choose any ProductID in the middle of the table // (not first or last row) to best illustrate the result. SqlCommand commandInsert = new SqlCommand(); commandInsert.Connection = sourceConnection; commandInsert.CommandText = "SET IDENTITY_INSERT dbo.BulkCopyDemoMatchingColumns ON;" + "INSERT INTO " + "dbo.BulkCopyDemoMatchingColumns " + "([ProductID], [Name] ,[ProductNumber]) " + "VALUES(446, 'Lock Nut 23','LN-3416');" + "SET IDENTITY_INSERT dbo.BulkCopyDemoMatchingColumns OFF"; commandInsert.ExecuteNonQuery(); // Perform an initial count on the destination table. SqlCommand commandRowCount = new SqlCommand( "SELECT COUNT(*) FROM dbo.BulkCopyDemoMatchingColumns;", sourceConnection); long countStart = System.Convert.ToInt32( commandRowCount.ExecuteScalar()); Console.WriteLine("Starting row count = {0}", countStart); // Get data from the source table as a SqlDataReader. SqlCommand commandSourceData = new SqlCommand( "SELECT ProductID, Name, ProductNumber " + "FROM Production.Product;", sourceConnection); SqlDataReader reader = commandSourceData.ExecuteReader(); //Set up the bulk copy object inside the transaction. 
using (SqlConnection destinationConnection = new SqlConnection(connectionString)) { destinationConnection.Open(); using (SqlTransaction transaction = destinationConnection.BeginTransaction()) { using (SqlBulkCopy bulkCopy = new SqlBulkCopy( destinationConnection, SqlBulkCopyOptions.KeepIdentity, transaction)) { bulkCopy.BatchSize = 10; bulkCopy.DestinationTableName = "dbo.BulkCopyDemoMatchingColumns"; // Write from the source to the destination. // This should fail with a duplicate key error. try { bulkCopy.WriteToServer(reader); transaction.Commit(); } catch (Exception ex) { Console.WriteLine(ex.Message); transaction.Rollback(); } finally { reader.Close(); } } } } // Perform a final count on the destination // table to see how many rows were added. long countEnd = System.Convert.ToInt32( commandRowCount.ExecuteScalar()); Console.WriteLine("Ending row count = {0}", countEnd); Console.WriteLine("{0} rows were added.", countEnd - countStart); Console.WriteLine("Press Enter to finish."); Console.ReadLine(); } } private static string GetConnectionString() // To avoid storing the connection string in your code, // you can retrieve it from a configuration file. { return "Data Source=(local); " + " Integrated Security=true;" + "Initial Catalog=AdventureWorks;"; } } See also
Multiple Active Result Sets (MARS)
Multiple Active Result Sets (MARS) is a feature that allows the execution of multiple batches on a single connection. In previous versions, only one batch could be executed at a time against a single connection. Executing multiple batches with MARS does not imply simultaneous execution of operations. In This Section
Enabling Multiple Active Result Sets
Manipulating Data
Related Sections
Asynchronous Operations See also
Enabling Multiple Active Result Sets
Multiple Active Result Sets (MARS) is a feature that works with SQL Server to allow the execution of multiple batches on a single connection. When MARS is enabled for use with SQL Server, each command object used adds a session to the connection. Note A single MARS session opens one logical connection for MARS to use and then one logical connection for each active command.
Enabling and Disabling MARS in the Connection String
Note The following connection strings use the sample AdventureWorks database included with SQL Server. The connection strings provided assume that the database is installed on a server named MSSQL1. Modify the connection string as necessary for your environment. The MARS feature is disabled by default. It can be enabled by adding the "MultipleActiveResultSets=True" keyword pair to your connection string. "True" is the only valid value for enabling MARS. The following example demonstrates how to connect to an instance of SQL Server and how to specify that MARS should be enabled. C#string connectionString = "Data Source=MSSQL1;" + "Initial Catalog=AdventureWorks;Integrated Security=SSPI;" + "MultipleActiveResultSets=True"; You can disable MARS by adding the "MultipleActiveResultSets=False" keyword pair to your connection string. "False" is the only valid value for disabling MARS. The following connection string demonstrates how to disable MARS. C#string connectionString = "Data Source=MSSQL1;" + "Initial Catalog=AdventureWorks;Integrated Security=SSPI;" + "MultipleActiveResultSets=False";
Special Considerations When Using MARS
In general, existing applications should not need modification to use a MARS-enabled connection. However, if you wish to use MARS features in your applications, you should understand the following special considerations.
Statement Interleaving
MARS operations execute synchronously on the server.
Statement interleaving of SELECT and BULK INSERT statements is allowed. However, data manipulation language (DML) and data definition language (DDL) statements execute atomically. Any statements attempting to execute while an atomic batch is executing are blocked. Parallel execution at the server is not a MARS feature. If two batches are submitted under a MARS connection, one of them containing a SELECT statement, the other containing a DML statement, the DML statement can begin executing while the SELECT statement is still executing. However, the DML statement must run to completion before the SELECT statement can make progress. If both statements are running under the same transaction, any changes made by a DML statement after the SELECT statement has started execution are not visible to the read operation. A WAITFOR statement inside a SELECT statement does not yield the transaction while it is waiting, that is, until the first row is produced. This implies that no other batches can execute within the same connection while a WAITFOR statement is waiting.
MARS Session Cache
When a connection is opened with MARS enabled, a logical session is created, which adds additional overhead. To minimize overhead and enhance performance, SqlClient caches the MARS session within a connection. The cache contains at most 10 MARS sessions. This value is not user adjustable. If the session limit is reached, a new session is created; an error is not generated. The cache and sessions contained in it are per-connection; they are not shared across connections. When a session is released, it is returned to the pool unless the pool's upper limit has been reached. If the cache pool is full, the session is closed. MARS sessions do not expire. They are only cleaned up when the connection object is disposed. The MARS session cache is not preloaded. It is loaded as the application requires more sessions.
Thread Safety
MARS operations are not thread-safe.
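The session-cache rules above (at most 10 cached sessions per connection, no error when the limit is exceeded, and released sessions discarded once the cache is full) can be sketched as a small model. This is an illustration of the documented behavior only, not SqlClient's actual implementation; the MarsSessionCache class and its members are hypothetical names.

```csharp
using System;
using System.Collections.Generic;

// A simplified model of the per-connection MARS session cache
// described above. Illustration only; not SqlClient's implementation.
class MarsSessionCache
{
    private readonly Stack<int> _pool = new Stack<int>();
    private const int Limit = 10;   // documented upper bound; not user adjustable
    private int _nextSessionId;

    // Reuse a cached session if one is available; otherwise create a
    // new one. Exceeding the limit creates a session rather than failing.
    public int Acquire()
    {
        return _pool.Count > 0 ? _pool.Pop() : _nextSessionId++;
    }

    // A released session returns to the cache unless the cache is full,
    // in which case it is closed (discarded here).
    public void Release(int session)
    {
        if (_pool.Count < Limit)
        {
            _pool.Push(session);
        }
    }

    public int CachedCount => _pool.Count;
}

class Demo
{
    static void Main()
    {
        var cache = new MarsSessionCache();
        var sessions = new List<int>();
        // 12 concurrent sessions are created; no error at the limit.
        for (int i = 0; i < 12; i++) sessions.Add(cache.Acquire());
        foreach (int s in sessions) cache.Release(s);
        // Only 10 of the 12 released sessions remain cached.
        Console.WriteLine(cache.CachedCount);
    }
}
```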
Connection Pooling
MARS-enabled connections are pooled like any other connection. If an application opens two connections, one with MARS enabled and one with MARS disabled, the two connections are in separate pools. For more information, see SQL Server Connection Pooling (ADO.NET).
SQL Server Batch Execution Environment
When a connection is opened, a default environment is defined. This environment is then copied into a logical MARS session. The batch execution environment includes the following components:
With MARS, a default execution environment is associated with a connection. Every new batch that starts executing under a given connection receives a copy of the default environment. Whenever code is executed under a given batch, all changes made to the environment are scoped to the specific batch. Once execution finishes, the execution settings are copied into the default environment. In the case of a single batch issuing several commands to be executed sequentially under the same transaction, semantics are the same as those exposed by connections involving earlier clients or servers.
Parallel Execution
MARS is not designed to remove all requirements for multiple connections in an application. If an application needs true parallel execution of commands against a server, multiple connections should be used. For example, consider the following scenario. Two command objects are created, one for processing a result set and another for updating data; they share a common connection via MARS. In this scenario, the Transaction.Commit fails on the update until all the results have been read on the first command object, yielding the following exception:
Message: Transaction context in use by another session.
Source: .NET SqlClient Data Provider
Expected: (null)
Received: System.Data.SqlClient.SqlException
There are three options for handling this scenario:
Detecting MARS Support
An application can check for MARS support by reading the SqlConnection.ServerVersion value. The major number should be 9 for SQL Server 2005 and 10 for SQL Server 2008. See also
Manipulating Data
Before the introduction of Multiple Active Result Sets (MARS), developers had to use either multiple connections or server-side cursors to solve certain scenarios. In addition, when multiple connections were used in a transactional situation, bound connections (with sp_getbindtoken and sp_bindsession) were required. The following scenarios show how to use a MARS-enabled connection instead of multiple connections.
Using Multiple Commands with MARS
The following console application demonstrates how to use two SqlDataReader objects with two SqlCommand objects and a single SqlConnection object with MARS enabled.
Example
The example opens a single connection to the AdventureWorks database. Using a SqlCommand object, a SqlDataReader is created. As the reader is used, a second SqlDataReader is opened, using data from the first SqlDataReader as input to the WHERE clause for the second reader. Note The following example uses the sample AdventureWorks database included with SQL Server. The connection string provided in the sample code assumes that the database is installed and available on the local computer. Modify the connection string as necessary for your environment. C#using System; using System.Data; using System.Data.SqlClient; class Class1 { static void Main() { // By default, MARS is disabled when connecting // to a MARS-enabled host. // It must be enabled in the connection string.
string connectionString = GetConnectionString(); int vendorID; SqlDataReader productReader = null; string vendorSQL = "SELECT VendorId, Name FROM Purchasing.Vendor"; string productSQL = "SELECT Production.Product.Name FROM Production.Product " + "INNER JOIN Purchasing.ProductVendor " + "ON Production.Product.ProductID = " + "Purchasing.ProductVendor.ProductID " + "WHERE Purchasing.ProductVendor.VendorID = @VendorId"; using (SqlConnection awConnection = new SqlConnection(connectionString)) { SqlCommand vendorCmd = new SqlCommand(vendorSQL, awConnection); SqlCommand productCmd = new SqlCommand(productSQL, awConnection); productCmd.Parameters.Add("@VendorId", SqlDbType.Int); awConnection.Open(); using (SqlDataReader vendorReader = vendorCmd.ExecuteReader()) { while (vendorReader.Read()) { Console.WriteLine(vendorReader["Name"]); vendorID = (int)vendorReader["VendorId"]; productCmd.Parameters["@VendorId"].Value = vendorID; // The following line of code requires // a MARS-enabled connection. productReader = productCmd.ExecuteReader(); using (productReader) { while (productReader.Read()) { Console.WriteLine(" " + productReader["Name"].ToString()); } } } } Console.WriteLine("Press any key to continue"); Console.ReadLine(); } } private static string GetConnectionString() { // To avoid storing the connection string in your code, // you can retrieve it from a configuration file. return "Data Source=(local);Integrated Security=SSPI;" + "Initial Catalog=AdventureWorks;MultipleActiveResultSets=True"; } }
Reading and Updating Data with MARS
MARS allows a connection to be used for both read operations and data manipulation language (DML) operations with more than one pending operation. This feature eliminates the need for an application to deal with connection-busy errors. In addition, MARS can replace the use of server-side cursors, which generally consume more resources.
Finally, because multiple operations can operate on a single connection, they can share the same transaction context, eliminating the need to use the sp_getbindtoken and sp_bindsession system stored procedures.
Example
The following console application demonstrates how to use two SqlDataReader objects with three SqlCommand objects and a single SqlConnection object with MARS enabled. The first command object retrieves a list of vendors whose credit rating is 5. The second command object uses the vendor ID provided from a SqlDataReader to load the second SqlDataReader with all of the products for the particular vendor. Each product record is visited by the second SqlDataReader. A calculation is performed to determine what the new OnOrderQty should be. The third command object is then used to update the ProductVendor table with the new value. This entire process takes place within a single transaction, which is rolled back at the end. Note The following example uses the sample AdventureWorks database included with SQL Server. The connection string provided in the sample code assumes that the database is installed and available on the local computer. Modify the connection string as necessary for your environment. C#using System; using System.Collections.Generic; using System.Text; using System.Data; using System.Data.SqlClient; class Program { static void Main() { // By default, MARS is disabled when connecting // to a MARS-enabled host. // It must be enabled in the connection string.
string connectionString = GetConnectionString(); SqlTransaction updateTx = null; SqlCommand vendorCmd = null; SqlCommand prodVendCmd = null; SqlCommand updateCmd = null; SqlDataReader prodVendReader = null; int vendorID = 0; int productID = 0; int minOrderQty = 0; int maxOrderQty = 0; int onOrderQty = 0; int recordsUpdated = 0; int totalRecordsUpdated = 0; string vendorSQL = "SELECT VendorID, Name FROM Purchasing.Vendor " + "WHERE CreditRating = 5"; string prodVendSQL = "SELECT ProductID, MaxOrderQty, MinOrderQty, OnOrderQty " + "FROM Purchasing.ProductVendor " + "WHERE VendorID = @VendorID"; string updateSQL = "UPDATE Purchasing.ProductVendor " + "SET OnOrderQty = @OrderQty " + "WHERE ProductID = @ProductID AND VendorID = @VendorID"; using (SqlConnection awConnection = new SqlConnection(connectionString)) { awConnection.Open(); updateTx = awConnection.BeginTransaction(); vendorCmd = new SqlCommand(vendorSQL, awConnection); vendorCmd.Transaction = updateTx; prodVendCmd = new SqlCommand(prodVendSQL, awConnection); prodVendCmd.Transaction = updateTx; prodVendCmd.Parameters.Add("@VendorId", SqlDbType.Int); updateCmd = new SqlCommand(updateSQL, awConnection); updateCmd.Transaction = updateTx; updateCmd.Parameters.Add("@OrderQty", SqlDbType.Int); updateCmd.Parameters.Add("@ProductID", SqlDbType.Int); updateCmd.Parameters.Add("@VendorID", SqlDbType.Int); using (SqlDataReader vendorReader = vendorCmd.ExecuteReader()) { while (vendorReader.Read()) { Console.WriteLine(vendorReader["Name"]); vendorID = (int) vendorReader["VendorID"]; prodVendCmd.Parameters["@VendorID"].Value = vendorID; prodVendReader = prodVendCmd.ExecuteReader(); using (prodVendReader) { while (prodVendReader.Read()) { productID = (int) prodVendReader["ProductID"]; if (prodVendReader["OnOrderQty"] == DBNull.Value) { minOrderQty = (int) prodVendReader["MinOrderQty"]; onOrderQty = minOrderQty; } else { maxOrderQty = (int) prodVendReader["MaxOrderQty"]; onOrderQty = (int)(maxOrderQty / 2); } 
updateCmd.Parameters["@OrderQty"].Value = onOrderQty; updateCmd.Parameters["@ProductID"].Value = productID; updateCmd.Parameters["@VendorID"].Value = vendorID; recordsUpdated = updateCmd.ExecuteNonQuery(); totalRecordsUpdated += recordsUpdated; } } } } Console.WriteLine("Total Records Updated: " + totalRecordsUpdated.ToString()); updateTx.Rollback(); Console.WriteLine("Transaction Rolled Back"); } Console.WriteLine("Press any key to continue"); Console.ReadLine(); } private static string GetConnectionString() { // To avoid storing the connection string in your code, // you can retrieve it from a configuration file. return "Data Source=(local);Integrated Security=SSPI;" + "Initial Catalog=AdventureWorks;" + "MultipleActiveResultSets=True"; } } See alsoAsynchronous OperationsSome database operations, such as command executions, can take significant time to complete. In such a case, single-threaded applications must block other operations and wait for the command to finish before they can continue their own operations. In contrast, being able to assign the long-running operation to a background thread allows the foreground thread to remain active throughout the operation. In a Windows application, for example, delegating the long-running operation to a background thread allows the user interface thread to remain responsive while the operation is executing. The .NET Framework provides several standard asynchronous design patterns that developers can use to take advantage of background threads and free the user interface or high-priority threads to complete other operations. ADO.NET supports these same design patterns in its SqlCommand class. Specifically, the BeginExecuteNonQuery, BeginExecuteReader, and BeginExecuteXmlReader methods, paired with the EndExecuteNonQuery, EndExecuteReader, and EndExecuteXmlReader methods, provide the asynchronous support. 
Note Asynchronous programming is a core feature of the .NET Framework, and ADO.NET takes full advantage of the standard design patterns. For more information about the different asynchronous techniques available to developers, see Calling Synchronous Methods Asynchronously. Although using asynchronous techniques with ADO.NET features does not add any special considerations, it is likely that more developers will use asynchronous features in ADO.NET than in other areas of the .NET Framework. It is important to be aware of the benefits and pitfalls of creating multithreaded applications. The examples that follow in this section point out several important issues that developers will need to take into account when building applications that incorporate multithreaded functionality. In This Section
Windows Applications Using Callbacks
ASP.NET Applications Using Wait Handles
Polling in Console Applications See also
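The wait-handle techniques covered in these topics build on standard System.Threading primitives. The following database-free sketch shows WaitAny and WaitAll, with worker threads standing in for long-running commands; the delays and names are illustrative assumptions.

```csharp
using System;
using System.Threading;

// A database-free sketch of WaitHandle.WaitAny and WaitHandle.WaitAll.
// Worker threads stand in for long-running SQL commands; each signals
// a ManualResetEvent when it finishes.
class WaitSketch
{
    static void Main()
    {
        var done = new ManualResetEvent[3];
        for (int i = 0; i < done.Length; i++)
        {
            done[i] = new ManualResetEvent(false);
            int delay = (i + 1) * 100;      // emulate commands of different lengths
            ManualResetEvent handle = done[i];
            new Thread(() => { Thread.Sleep(delay); handle.Set(); }).Start();
        }

        // WaitAny returns the index of the first handle to be signaled
        // (or WaitHandle.WaitTimeout if the timeout elapses).
        int first = WaitHandle.WaitAny(done, 5000);
        Console.WriteLine("First to finish: " + first);

        // WaitAll blocks until every handle is signaled, or returns
        // false if the timeout elapses first.
        bool all = WaitHandle.WaitAll(done, 5000);
        Console.WriteLine("All finished: " + all);
    }
}
```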
Windows Applications Using Callbacks
In most asynchronous processing scenarios, you want to start a database operation and continue running other processes without waiting for the database operation to complete. However, many scenarios require doing something once the database operation has ended. In a Windows application, for example, you may want to delegate the long-running operation to a background thread while allowing the user interface thread to remain responsive. However, when the database operation is complete, you want to use the results to populate the form. This type of scenario is best implemented with a callback. You define a callback by specifying an AsyncCallback delegate in the BeginExecuteNonQuery, BeginExecuteReader, or BeginExecuteXmlReader method. The delegate is called when the operation is complete. You can pass the delegate a reference to the SqlCommand itself, making it easy to access the SqlCommand object and call the appropriate End method without having to use a global variable.
Example
The following Windows application demonstrates the use of the BeginExecuteNonQuery method, executing a Transact-SQL statement that includes a delay of a few seconds (emulating a long-running command). This example demonstrates a number of important techniques, including calling a method that interacts with the form from a separate thread. In addition, this example demonstrates how you must block users from executing a command multiple times concurrently, and how you must ensure that the form does not close before the callback procedure is called. To set up this example, create a new Windows application. Place a Button control and two Label controls on the form (accepting the default name for each control). Add the following code to the form's class, modifying the connection string as necessary for your environment.
C#// Add these to the top of the class, if they're not already there: using System; using System.Data; using System.Data.SqlClient; // Hook up the form's Load event handler (you can double-click on // the form's design surface in Visual Studio), and then add // this code to the form's class: // You'll need this delegate in order to display text from a thread // other than the form's thread. See the HandleCallback // procedure for more information. // This same delegate matches both the DisplayStatus // and DisplayResults methods. private delegate void DisplayInfoDelegate(string Text); // This flag ensures that the user doesn't attempt // to restart the command or close the form while the // asynchronous command is executing. private bool isExecuting; // This example maintains the connection object // externally, so that it's available for closing. private SqlConnection connection; private static string GetConnectionString() { // To avoid storing the connection string in your code, // you can retrieve it from a configuration file. // If you have not included "Asynchronous Processing=true" in the // connection string, the command will not be able // to execute asynchronously. return "Data Source=(local);Integrated Security=SSPI;" + "Initial Catalog=AdventureWorks; Asynchronous Processing=true"; } private void DisplayStatus(string Text) { this.label1.Text = Text; } private void DisplayResults(string Text) { this.label1.Text = Text; DisplayStatus("Ready"); } private void Form1_FormClosing(object sender, System.Windows.Forms.FormClosingEventArgs e) { if (isExecuting) { MessageBox.Show(this, "Can't close the form until " + "the pending asynchronous command has completed. Please " + "wait..."); e.Cancel = true; } } private void button1_Click(object sender, System.EventArgs e) { if (isExecuting) { MessageBox.Show(this, "Already executing. 
Please wait until " + "the current query has completed."); } else { SqlCommand command = null; try { DisplayResults(""); DisplayStatus("Connecting..."); connection = new SqlConnection(GetConnectionString()); // To emulate a long-running query, wait for // a few seconds before working with the data. // This command doesn't do much, but that's the point-- // it doesn't change your data, in the long run. string commandText = "WAITFOR DELAY '0:0:05';" + "UPDATE Production.Product " + "SET ReorderPoint = ReorderPoint + 1 " + "WHERE ReorderPoint Is Not Null;" + "UPDATE Production.Product " + "SET ReorderPoint = ReorderPoint - 1 " + "WHERE ReorderPoint Is Not Null"; command = new SqlCommand(commandText, connection); connection.Open(); DisplayStatus("Executing..."); isExecuting = true; // Although it's not required that you pass the // SqlCommand object as the second parameter in the // BeginExecuteNonQuery call, doing so makes it easier // to call EndExecuteNonQuery in the callback procedure. AsyncCallback callback = new AsyncCallback(HandleCallback); // Once the BeginExecuteNonQuery method is called, // the code continues--and the user can interact with // the form--while the server executes the query. command.BeginExecuteNonQuery(callback, command); } catch (Exception ex) { isExecuting = false; DisplayStatus($"Ready (last error: {ex.Message})"); if (connection != null) { connection.Close(); } } } } private void HandleCallback(IAsyncResult result) { try { // Retrieve the original command object, passed // to this procedure in the AsyncState property // of the IAsyncResult parameter. 
SqlCommand command = (SqlCommand)result.AsyncState; int rowCount = command.EndExecuteNonQuery(result); string rowText = " rows affected."; if (rowCount == 1) { rowText = " row affected."; } rowText = rowCount + rowText; // You may not interact with the form and its contents // from a different thread, and this callback procedure // is all but guaranteed to be running from a different thread // than the form. Therefore you cannot simply call code that // displays the results, like this: // DisplayResults(rowText) // Instead, you must call the procedure from the form's thread. // One simple way to accomplish this is to call the Invoke // method of the form, which calls the delegate you supply // from the form's thread. DisplayInfoDelegate del = new DisplayInfoDelegate(DisplayResults); this.Invoke(del, rowText); } catch (Exception ex) { // Because you're now running code in a separate thread, // if you don't handle the exception here, none of your other // code will catch the exception. Because none of your // code is on the call stack in this thread, there's nothing // higher up the stack to catch the exception if you don't // handle it here. You can either log the exception or // invoke a delegate (as in the non-error case in this // example) to display the error on the form. In no case // can you simply display the error without executing a // delegate as in the try block here. // You can create the delegate instance as you // invoke it, like this: this.Invoke(new DisplayInfoDelegate(DisplayStatus), $"Ready (last error: {ex.Message})"); } finally { isExecuting = false; if (connection != null) { connection.Close(); } } } private void Form1_Load(object sender, System.EventArgs e) { this.button1.Click += new System.EventHandler(this.button1_Click); this.FormClosing += new System.Windows.Forms.
FormClosingEventHandler(this.Form1_FormClosing); } See also
ASP.NET Applications Using Wait Handles
The callback and polling models for handling asynchronous operations are useful when your application is processing only one asynchronous operation at a time. The Wait models provide a more flexible way of processing multiple asynchronous operations. There are two Wait models, named for the WaitHandle methods used to implement them: the Wait (Any) model and the Wait (All) model. To use either Wait model, you need to use the AsyncWaitHandle property of the IAsyncResult object returned by the BeginExecuteNonQuery, BeginExecuteReader, or BeginExecuteXmlReader methods. The WaitAny and WaitAll methods both require you to pass the WaitHandle objects as an argument, grouped together in an array. Both Wait methods monitor the asynchronous operations, waiting for completion. The WaitAny method waits for any of the operations to complete or time out. Once you know a particular operation is complete, you can process its results and then continue waiting for the next operation to complete or time out. The WaitAll method waits for all of the processes in the array of WaitHandle instances to complete or time out before continuing. The Wait models' benefit is most striking when you need to run multiple operations of some length on different servers, or when your server is powerful enough to process all the queries at the same time. In the examples presented here, three queries emulate long processes by adding WAITFOR commands of varying lengths to inconsequential SELECT queries.
Example: Wait (Any) Model
The following example illustrates the Wait (Any) model. Once three asynchronous processes are started, the WaitAny method is called to wait for the completion of any one of them. As each process completes, the EndExecuteReader method is called and the resulting SqlDataReader object is read.
At this point, a real-world application would likely use the SqlDataReader to populate a portion of the page. In this simple example, the time the process completed is added to a text box corresponding to the process. Taken together, the times in the text boxes illustrate the point: Code is executed each time a process completes. To set up this example, create a new ASP.NET Web Site project. Place a Button control and four TextBox controls on the page (accepting the default name for each control). Add the following code to the form's class, modifying the connection string as necessary for your environment. C#// Add the following using statements, if they are not already there. using System; using System.Data; using System.Configuration; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; using System.Threading; using System.Data.SqlClient; // Add this code to the page's class string GetConnectionString() // To avoid storing the connection string in your code, // you can retrieve it from a configuration file. // If you have not included "Asynchronous Processing=true" // in the connection string, the command will not be able // to execute asynchronously. { return "Data Source=(local);Integrated Security=SSPI;" + "Initial Catalog=AdventureWorks;" + "Asynchronous Processing=true"; } void Button1_Click(object sender, System.EventArgs e) { // In a real-world application, you might be connecting to // three different servers or databases. For the example, // we connect to only one. SqlConnection connection1 = new SqlConnection(GetConnectionString()); SqlConnection connection2 = new SqlConnection(GetConnectionString()); SqlConnection connection3 = new SqlConnection(GetConnectionString()); // To keep the example simple, all three asynchronous // processes select a row from the same table. 
WAITFOR // commands are used to emulate long-running processes // that complete after different periods of time. string commandText1 = "WAITFOR DELAY '0:0:01';" + "SELECT * FROM Production.Product " + "WHERE ProductNumber = 'BL-2036'"; string commandText2 = "WAITFOR DELAY '0:0:05';" + "SELECT * FROM Production.Product " + "WHERE ProductNumber = 'BL-2036'"; string commandText3 = "WAITFOR DELAY '0:0:10';" + "SELECT * FROM Production.Product " + "WHERE ProductNumber = 'BL-2036'"; try // For each process, open a connection and begin // execution. Use the IAsyncResult object returned by // BeginExecuteReader to add a WaitHandle for the // process to the array. { connection1.Open(); SqlCommand command1 = new SqlCommand(commandText1, connection1); IAsyncResult result1 = command1.BeginExecuteReader(); WaitHandle waitHandle1 = result1.AsyncWaitHandle; connection2.Open(); SqlCommand command2 = new SqlCommand(commandText2, connection2); IAsyncResult result2 = command2.BeginExecuteReader(); WaitHandle waitHandle2 = result2.AsyncWaitHandle; connection3.Open(); SqlCommand command3 = new SqlCommand(commandText3, connection3); IAsyncResult result3 = command3.BeginExecuteReader(); WaitHandle waitHandle3 = result3.AsyncWaitHandle; WaitHandle[] waitHandles = { waitHandle1, waitHandle2, waitHandle3 }; int index; for (int countWaits = 0; countWaits <= 2; countWaits++) { // WaitAny waits for any of the processes to // complete. The return value is either the index // of the array element whose process just // completed, or the WaitTimeout value. index = WaitHandle.WaitAny(waitHandles, 60000, false); // This example doesn't actually do anything with // the data returned by the processes, but the // code opens readers for each just to demonstrate // the concept. // Instead of using the returned data to fill the // controls on the page, the example adds the time // the process was completed to the corresponding // text box. 
switch (index) { case 0: SqlDataReader reader1; reader1 = command1.EndExecuteReader(result1); if (reader1.Read()) { TextBox1.Text = "Completed " + System.DateTime.Now.ToLongTimeString(); } reader1.Close(); break; case 1: SqlDataReader reader2; reader2 = command2.EndExecuteReader(result2); if (reader2.Read()) { TextBox2.Text = "Completed " + System.DateTime.Now.ToLongTimeString(); } reader2.Close(); break; case 2: SqlDataReader reader3; reader3 = command3.EndExecuteReader(result3); if (reader3.Read()) { TextBox3.Text = "Completed " + System.DateTime.Now.ToLongTimeString(); } reader3.Close(); break; case WaitHandle.WaitTimeout: throw new Exception("Timeout"); } } } catch (Exception ex) { TextBox4.Text = ex.ToString(); } connection1.Close(); connection2.Close(); connection3.Close(); }
Example: Wait (All) Model
The following example illustrates the Wait (All) model. Once three asynchronous processes are started, the WaitAll method is called to wait for the processes to complete or time out. Like the example of the Wait (Any) model, the time the process completed is added to a text box corresponding to the process. Again, the times in the text boxes illustrate the point: Code following the WaitAll method is executed only after all processes are complete. To set up this example, create a new ASP.NET Web Site project. Place a Button control and four TextBox controls on the page (accepting the default name for each control). Add the following code to the form's class, modifying the connection string as necessary for your environment. C#// Add the following using statements, if they are not already there.
using System; using System.Data; using System.Configuration; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; using System.Threading; using System.Data.SqlClient; // Add this code to the page's class string GetConnectionString() // To avoid storing the connection string in your code, // you can retrieve it from a configuration file. // If you have not included "Asynchronous Processing=true" // in the connection string, the command will not be able // to execute asynchronously. { return "Data Source=(local);Integrated Security=SSPI;" + "Initial Catalog=AdventureWorks;" + "Asynchronous Processing=true"; } void Button1_Click(object sender, System.EventArgs e) { // In a real-world application, you might be connecting to // three different servers or databases. For the example, // we connect to only one. SqlConnection connection1 = new SqlConnection(GetConnectionString()); SqlConnection connection2 = new SqlConnection(GetConnectionString()); SqlConnection connection3 = new SqlConnection(GetConnectionString()); // To keep the example simple, all three asynchronous // processes execute UPDATE queries that result in // no change to the data. WAITFOR // commands are used to emulate long-running processes // that complete after different periods of time. 
    string commandText1 =
        "UPDATE Production.Product " +
        "SET ReorderPoint = ReorderPoint + 1 " +
        "WHERE ReorderPoint Is Not Null;" +
        "WAITFOR DELAY '0:0:01';" +
        "UPDATE Production.Product " +
        "SET ReorderPoint = ReorderPoint - 1 " +
        "WHERE ReorderPoint Is Not Null";
    string commandText2 =
        "UPDATE Production.Product " +
        "SET ReorderPoint = ReorderPoint + 1 " +
        "WHERE ReorderPoint Is Not Null;" +
        "WAITFOR DELAY '0:0:05';" +
        "UPDATE Production.Product " +
        "SET ReorderPoint = ReorderPoint - 1 " +
        "WHERE ReorderPoint Is Not Null";
    string commandText3 =
        "UPDATE Production.Product " +
        "SET ReorderPoint = ReorderPoint + 1 " +
        "WHERE ReorderPoint Is Not Null;" +
        "WAITFOR DELAY '0:0:10';" +
        "UPDATE Production.Product " +
        "SET ReorderPoint = ReorderPoint - 1 " +
        "WHERE ReorderPoint Is Not Null";

    try
    {
        // For each process, open a connection and begin
        // execution. Use the IAsyncResult object returned by
        // BeginExecuteNonQuery to add a WaitHandle for the
        // process to the array.
        connection1.Open();
        SqlCommand command1 = new SqlCommand(commandText1, connection1);
        IAsyncResult result1 = command1.BeginExecuteNonQuery();
        WaitHandle waitHandle1 = result1.AsyncWaitHandle;

        connection2.Open();
        SqlCommand command2 = new SqlCommand(commandText2, connection2);
        IAsyncResult result2 = command2.BeginExecuteNonQuery();
        WaitHandle waitHandle2 = result2.AsyncWaitHandle;

        connection3.Open();
        SqlCommand command3 = new SqlCommand(commandText3, connection3);
        IAsyncResult result3 = command3.BeginExecuteNonQuery();
        WaitHandle waitHandle3 = result3.AsyncWaitHandle;

        WaitHandle[] waitHandles = { waitHandle1, waitHandle2, waitHandle3 };
        bool result;
        // WaitAll waits for all of the processes to
        // complete. The return value is True if the processes
        // all completed successfully, False if any process
        // timed out.
        result = WaitHandle.WaitAll(waitHandles, 60000, false);
        if (result)
        {
            long rowCount1 = command1.EndExecuteNonQuery(result1);
            TextBox1.Text = "Completed " +
                System.DateTime.Now.ToLongTimeString();
            long rowCount2 = command2.EndExecuteNonQuery(result2);
            TextBox2.Text = "Completed " +
                System.DateTime.Now.ToLongTimeString();
            long rowCount3 = command3.EndExecuteNonQuery(result3);
            TextBox3.Text = "Completed " +
                System.DateTime.Now.ToLongTimeString();
        }
        else
        {
            throw new Exception("Timeout");
        }
    }
    catch (Exception ex)
    {
        TextBox4.Text = ex.ToString();
    }
    connection1.Close();
    connection2.Close();
    connection3.Close();
}
See also
Polling in Console Applications
Asynchronous operations in ADO.NET allow you to initiate time-consuming database operations on one thread while performing other tasks on another thread. In most scenarios, however, you will eventually reach a point where your application should not continue until the database operation is complete. For such cases, it is useful to poll the asynchronous operation to determine whether it has completed. You can use the IsCompleted property to find out whether the operation has completed.
Example
The following console application updates data within the AdventureWorks sample database, doing its work asynchronously. In order to emulate a long-running process, this example inserts a WAITFOR statement in the command text. Normally, you would not try to make your commands run slower, but doing so in this case makes it easier to demonstrate asynchronous behavior.
C#
using System;
using System.Data;
using System.Data.SqlClient;

class Class1
{
    [STAThread]
    static void Main()
    {
        // The WAITFOR statement simply adds enough time to
        // prove the asynchronous nature of the command.
        string commandText =
            "UPDATE Production.Product SET ReorderPoint = " +
            "ReorderPoint + 1 " +
            "WHERE ReorderPoint Is Not Null;" +
            "WAITFOR DELAY '0:0:3';" +
            "UPDATE Production.Product SET ReorderPoint = " +
            "ReorderPoint - 1 " +
            "WHERE ReorderPoint Is Not Null";

        RunCommandAsynchronously(commandText, GetConnectionString());

        Console.WriteLine("Press Enter to continue.");
        Console.ReadLine();
    }

    private static void RunCommandAsynchronously(
        string commandText, string connectionString)
    {
        // Given command text and connection string, asynchronously
        // execute the specified command against the connection.
        // For this example, the code displays an indicator as it's
        // working, verifying the asynchronous behavior.
        using (SqlConnection connection =
            new SqlConnection(connectionString))
        {
            try
            {
                int count = 0;
                SqlCommand command =
                    new SqlCommand(commandText, connection);
                connection.Open();
                IAsyncResult result = command.BeginExecuteNonQuery();
                while (!result.IsCompleted)
                {
                    Console.WriteLine("Waiting ({0})", count++);
                    // Wait for 1/10 second, so the counter
                    // doesn't consume all available
                    // resources on the main thread.
                    System.Threading.Thread.Sleep(100);
                }
                Console.WriteLine(
                    "Command complete. Affected {0} rows.",
                    command.EndExecuteNonQuery(result));
            }
            catch (SqlException ex)
            {
                Console.WriteLine("Error ({0}): {1}",
                    ex.Number, ex.Message);
            }
            catch (InvalidOperationException ex)
            {
                Console.WriteLine("Error: {0}", ex.Message);
            }
            catch (Exception ex)
            {
                // You might want to pass these errors
                // back out to the caller.
                Console.WriteLine("Error: {0}", ex.Message);
            }
        }
    }

    private static string GetConnectionString()
    {
        // To avoid storing the connection string in your code,
        // you can retrieve it from a configuration file.
        // If you have not included "Asynchronous Processing=true"
        // in the connection string, the command will not be able
        // to execute asynchronously.
        return "Data Source=(local);Integrated Security=SSPI;" +
            "Initial Catalog=AdventureWorks;" +
            "Asynchronous Processing=true";
    }
}
See also
Table-Valued Parameters
Table-valued parameters provide an easy way to marshal multiple rows of data from a client application to SQL Server without requiring multiple round trips or special server-side logic for processing the data. You can use table-valued parameters to encapsulate rows of data in a client application and send the data to the server in a single parameterized command. The incoming data rows are stored in a table variable that can then be operated on by using Transact-SQL. Column values in table-valued parameters can be accessed using standard Transact-SQL SELECT statements. Table-valued parameters are strongly typed and their structure is automatically validated. The size of table-valued parameters is limited only by server memory. Note You cannot return data in a table-valued parameter. Table-valued parameters are input-only; the OUTPUT keyword is not supported. For more information about table-valued parameters, see the following resources.
Passing Multiple Rows in Previous Versions of SQL Server
Before table-valued parameters were introduced in SQL Server 2008, the options for passing multiple rows of data to a stored procedure or a parameterized SQL command were limited. A developer could choose from the following options for passing multiple rows to the server:
Creating Table-Valued Parameter TypesTable-valued parameters are based on strongly-typed table structures that are defined by using Transact-SQL CREATE TYPE statements. You have to create a table type and define the structure in SQL Server before you can use table-valued parameters in your client applications. For more information about creating table types, see User-Defined Table Types in SQL Server Books Online. The following statement creates a table type named CategoryTableType that consists of CategoryID and CategoryName columns: CREATE TYPE dbo.CategoryTableType AS TABLE ( CategoryID int, CategoryName nvarchar(50) ) After you create a table type, you can declare table-valued parameters based on that type. The following Transact-SQL fragment demonstrates how to declare a table-valued parameter in a stored procedure definition. Note that the READONLY keyword is required for declaring a table-valued parameter. CREATE PROCEDURE usp_UpdateCategories (@tvpNewCategories dbo.CategoryTableType READONLY) Modifying Data with Table-Valued Parameters (Transact-SQL)Table-valued parameters can be used in set-based data modifications that affect multiple rows by executing a single statement. For example, you can select all the rows in a table-valued parameter and insert them into a database table, or you can create an update statement by joining a table-valued parameter to the table you want to update. The following Transact-SQL UPDATE statement demonstrates how to use a table-valued parameter by joining it to the Categories table. 
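The usp_InsertCategories procedure invoked by the client code later in this topic follows the same declaration pattern. As a sketch, its body might look like the following; the body shown here is an assumption, reconstructed from the INSERT-SELECT fragment used elsewhere in this topic:

```sql
-- Assumed definition of the usp_InsertCategories procedure used in
-- the ADO.NET examples; the body simply copies the rows passed in
-- the table-valued parameter into the target table.
CREATE PROCEDURE usp_InsertCategories
    (@tvpNewCategories dbo.CategoryTableType READONLY)
AS
BEGIN
    INSERT INTO dbo.Categories (CategoryID, CategoryName)
    SELECT nc.CategoryID, nc.CategoryName
    FROM @tvpNewCategories AS nc;
END
```

Because the parameter is declared READONLY, the procedure body can select from it but cannot modify its contents.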
When you use a table-valued parameter with a JOIN in a FROM clause, you must also alias it, as shown here, where the table-valued parameter is aliased as "ec": UPDATE dbo.Categories SET Categories.CategoryName = ec.CategoryName FROM dbo.Categories INNER JOIN @tvpEditedCategories AS ec ON dbo.Categories.CategoryID = ec.CategoryID; This Transact-SQL example demonstrates how to select rows from a table-valued parameter to perform an INSERT in a single set-based operation. INSERT INTO dbo.Categories (CategoryID, CategoryName) SELECT nc.CategoryID, nc.CategoryName FROM @tvpNewCategories AS nc; Limitations of Table-Valued ParametersThere are several limitations to table-valued parameters:
Configuring a SqlParameter Example
System.Data.SqlClient supports populating table-valued parameters from DataTable, DbDataReader, or IEnumerable<SqlDataRecord> objects. You must specify a type name for the table-valued parameter by using the TypeName property of a SqlParameter. The TypeName must match the name of a compatible type previously created on the server. The following code fragment demonstrates how to configure SqlParameter to insert data. In the following example, the addedCategories variable contains a DataTable. To see how the variable is populated, see the examples in the next section, Passing a Table-Valued Parameter to a Stored Procedure.
C#
// Configure the command and parameter.
SqlCommand insertCommand = new SqlCommand(sqlInsert, connection);
SqlParameter tvpParam = insertCommand.Parameters.AddWithValue(
    "@tvpNewCategories", addedCategories);
tvpParam.SqlDbType = SqlDbType.Structured;
tvpParam.TypeName = "dbo.CategoryTableType";
You can also use any object derived from DbDataReader to stream rows of data to a table-valued parameter, as shown in this fragment:
C#
// Configure the SqlCommand and table-valued parameter.
SqlCommand insertCommand = new SqlCommand(
    "usp_InsertCategories", connection);
insertCommand.CommandType = CommandType.StoredProcedure;
SqlParameter tvpParam = insertCommand.Parameters.AddWithValue(
    "@tvpNewCategories", dataReader);
tvpParam.SqlDbType = SqlDbType.Structured;
Passing a Table-Valued Parameter to a Stored Procedure
This example demonstrates how to pass table-valued parameter data to a stored procedure. The code extracts added rows into a new DataTable by using the GetChanges method. The code then defines a SqlCommand, setting the CommandType property to StoredProcedure. The SqlParameter is populated by using the AddWithValue method and the SqlDbType is set to Structured. The SqlCommand is then executed by using the ExecuteNonQuery method.
C#
// Assumes connection is an open SqlConnection object.
using (connection)
{
    // Create a DataTable with the modified rows.
    DataTable addedCategories =
        CategoriesDataTable.GetChanges(DataRowState.Added);

    // Configure the SqlCommand and SqlParameter.
    SqlCommand insertCommand = new SqlCommand(
        "usp_InsertCategories", connection);
    insertCommand.CommandType = CommandType.StoredProcedure;
    SqlParameter tvpParam = insertCommand.Parameters.AddWithValue(
        "@tvpNewCategories", addedCategories);
    tvpParam.SqlDbType = SqlDbType.Structured;

    // Execute the command.
    insertCommand.ExecuteNonQuery();
}
Passing a Table-Valued Parameter to a Parameterized SQL Statement
The following example demonstrates how to insert data into the dbo.Categories table by using an INSERT statement with a SELECT subquery that has a table-valued parameter as the data source. When passing a table-valued parameter to a parameterized SQL statement, you must specify a type name for the table-valued parameter by using the TypeName property of a SqlParameter. This TypeName must match the name of a compatible type previously created on the server. The code in this example uses the TypeName property to reference the type structure defined in dbo.CategoryTableType. Note If you supply a value for an identity column in a table-valued parameter, you must issue the SET IDENTITY_INSERT statement for the session.
C#
// Assumes connection is an open SqlConnection.
using (connection)
{
    // Create a DataTable with the modified rows.
    DataTable addedCategories =
        CategoriesDataTable.GetChanges(DataRowState.Added);

    // Define the INSERT-SELECT statement.
    string sqlInsert =
        "INSERT INTO dbo.Categories (CategoryID, CategoryName)" +
        " SELECT nc.CategoryID, nc.CategoryName" +
        " FROM @tvpNewCategories AS nc;";

    // Configure the command and parameter.
    SqlCommand insertCommand = new SqlCommand(sqlInsert, connection);
    SqlParameter tvpParam = insertCommand.Parameters.AddWithValue(
        "@tvpNewCategories", addedCategories);
    tvpParam.SqlDbType = SqlDbType.Structured;
    tvpParam.TypeName = "dbo.CategoryTableType";

    // Execute the command.
    insertCommand.ExecuteNonQuery();
}
Streaming Rows with a DataReader
You can also use any object derived from DbDataReader to stream rows of data to a table-valued parameter. The following code fragment demonstrates retrieving data from an Oracle database by using an OracleCommand and an OracleDataReader. The code then configures a SqlCommand to invoke a stored procedure with a single input parameter. The SqlDbType property of the SqlParameter is set to Structured. The AddWithValue method passes the OracleDataReader result set to the stored procedure as a table-valued parameter.
C#
// Assumes connection is an open SqlConnection.
// Retrieve data from Oracle.
OracleCommand selectCommand = new OracleCommand(
    "SELECT CategoryID, CategoryName FROM Categories",
    oracleConnection);
OracleDataReader oracleReader = selectCommand.ExecuteReader(
    CommandBehavior.CloseConnection);

// Configure the SqlCommand and table-valued parameter.
SqlCommand insertCommand = new SqlCommand(
    "usp_InsertCategories", connection);
insertCommand.CommandType = CommandType.StoredProcedure;
SqlParameter tvpParam = insertCommand.Parameters.AddWithValue(
    "@tvpNewCategories", oracleReader);
tvpParam.SqlDbType = SqlDbType.Structured;

// Execute the command.
insertCommand.ExecuteNonQuery();
See also
SQL Server Features and ADO.NETThe topics in this section discuss features in SQL Server that are targeted at developing database applications using ADO.NET. For more information, see SQL Server Books Online for the version of SQL Server you are using, as listed in the following table. SQL Server Books Online In This Section
Enumerating Instances of SQL Server (ADO.NET)
Provider Statistics for SQL Server
SQL Server Express User Instances
Database Mirroring in SQL Server
SQL Server Common Language Runtime Integration
Query Notifications in SQL Server
Snapshot Isolation in SQL Server
SqlClient Support for High Availability, Disaster Recovery
SqlClient Support for LocalDB See also
Enumerating Instances of SQL Server (ADO.NET)
SQL Server permits applications to find SQL Server instances within the current network. The SqlDataSourceEnumerator class exposes this information to the application developer, providing a DataTable containing information about all the visible servers. This returned table contains a list of server instances available on the network that matches the list provided when a user attempts to create a new connection and expands the drop-down list containing all the available servers on the Connection Properties dialog box. The results displayed are not always complete. Note As with most Windows services, it is best to run the SQL Browser service with the least possible privileges. See SQL Server Books Online for more information on the SQL Browser service and how to manage its behavior.
Retrieving an Enumerator Instance
In order to retrieve the table containing information about the available SQL Server instances, you must first retrieve an enumerator, using the shared/static Instance property:
C#
System.Data.Sql.SqlDataSourceEnumerator instance =
    System.Data.Sql.SqlDataSourceEnumerator.Instance;
Once you have retrieved the static instance, you can call the GetDataSources method, which returns a DataTable containing information about the available servers:
C#
System.Data.DataTable dataTable = instance.GetDataSources();
The table returned from the method call contains the following columns, all of which contain string values:
Enumeration Limitations
All of the available servers may or may not be listed. The list can vary depending on factors such as timeouts and network traffic, which can cause the list to be different on two consecutive calls. Only servers on the same network will be listed; broadcast packets typically will not traverse routers, which is why you may not see a server listed, although the list will be stable across calls. Listed servers may or may not have additional information such as IsClustered and version, depending on how the list was obtained. Servers listed through the SQL Server Browser service will have more details than those found through the Windows infrastructure, which will list only the name. Note Server enumeration is only available when running in full trust. Assemblies running in a partially trusted environment will not be able to use it, even if they have the SqlClientPermission Code Access Security (CAS) permission. SQL Server provides information for the SqlDataSourceEnumerator through the use of an external Windows service named SQL Browser. This service is enabled by default, but administrators may turn it off or disable it, making the server instance invisible to this class.
Example
The following console application retrieves information about all of the visible SQL Server instances and displays the information in the console window.
C#
using System;
using System.Data.Sql;

class Program
{
    static void Main()
    {
        // Retrieve the enumerator instance and then the data.
        SqlDataSourceEnumerator instance =
            SqlDataSourceEnumerator.Instance;
        System.Data.DataTable table = instance.GetDataSources();

        // Display the contents of the table.
        DisplayData(table);

        Console.WriteLine("Press any key to continue.");
        Console.ReadKey();
    }

    private static void DisplayData(System.Data.DataTable table)
    {
        foreach (System.Data.DataRow row in table.Rows)
        {
            foreach (System.Data.DataColumn col in table.Columns)
            {
                Console.WriteLine("{0} = {1}", col.ColumnName, row[col]);
            }
            Console.WriteLine("============================");
        }
    }
}
See also
Provider Statistics for SQL Server
Starting with the .NET Framework version 2.0, the .NET Framework Data Provider for SQL Server supports run-time statistics. You must enable statistics by setting the StatisticsEnabled property of the SqlConnection object to True after you have created a valid connection object. After statistics are enabled, you can review them as a "snapshot in time" by retrieving an IDictionary reference via the RetrieveStatistics method of the SqlConnection object. You enumerate through the list as a set of name/value pair dictionary entries. These name/value pairs are unordered. At any time, you can call the ResetStatistics method of the SqlConnection object to reset the counters. If statistics gathering has not been enabled, an exception is not generated. In addition, if RetrieveStatistics is called without StatisticsEnabled having been set first, the values retrieved are the initial values for each entry. If you enable statistics, run your application for a while, and then disable statistics, the values retrieved will reflect the values collected up to the point where statistics were disabled. All statistical values gathered are on a per-connection basis.
Statistical Values Available
Currently there are 18 different items available from the Microsoft SQL Server provider. The number of items available can be accessed via the Count property of the IDictionary interface reference returned by RetrieveStatistics. All of the counters for provider statistics use the common language runtime Int64 type (long in C# and Visual Basic), which is 64 bits wide.
The maximum value of the Int64 data type, as defined by the Int64.MaxValue field, is (2^63)-1. When the value of a counter reaches this maximum, it should no longer be considered accurate. This means that (2^63)-2, that is, Int64.MaxValue - 1, is effectively the greatest valid value for any statistic. Note A dictionary is used for returning provider statistics because the number, names, and order of the returned statistics may change in the future. Applications should not rely on a specific value being found in the dictionary, but should instead check whether the value is there and branch accordingly. The following table describes the current statistical values available. Note that the key names for the individual values are not localized across regional versions of the Microsoft .NET Framework.
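As the note above suggests, a lookup can be guarded before casting. A minimal sketch, assuming an open SqlConnection named connection on which StatisticsEnabled has been set to true and some commands have been executed:

```csharp
using System;
using System.Collections;
using System.Data.SqlClient;

// Retrieve the statistics snapshot for this connection.
IDictionary stats = connection.RetrieveStatistics();

// Check for the key before casting, rather than assuming it exists.
if (stats.Contains("SelectCount"))
{
    long selectCount = (long)stats["SelectCount"];
    Console.WriteLine("SelectCount: {0}", selectCount);
}
else
{
    Console.WriteLine("SelectCount was not reported by this provider.");
}
```

This keeps the application working even if a future provider version renames or drops a counter.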
Retrieving a ValueThe following console application shows how to enable statistics on a connection, retrieve four individual statistic values, and write them out to the console window. Note The following example uses the sample AdventureWorks database included with SQL Server. The connection string provided in the sample code assumes the database is installed and available on the local computer. Modify the connection string as necessary for your environment. C#using System; using System.Collections; using System.Collections.Generic; using System.Data; using System.Data.SqlClient; namespace CS_Stats_Console_GetValue { class Program { static void Main(string[] args) { string connectionString = GetConnectionString(); using (SqlConnection awConnection = new SqlConnection(connectionString)) { // StatisticsEnabled is False by default. // It must be set to True to start the // statistic collection process. awConnection.StatisticsEnabled = true; string productSQL = "SELECT * FROM Production.Product"; SqlDataAdapter productAdapter = new SqlDataAdapter(productSQL, awConnection); DataSet awDataSet = new DataSet(); awConnection.Open(); productAdapter.Fill(awDataSet, "ProductTable"); // Retrieve the current statistics as // a collection of values at this point // and time. IDictionary currentStatistics = awConnection.RetrieveStatistics(); Console.WriteLine("Total Counters: " + currentStatistics.Count.ToString()); Console.WriteLine(); // Retrieve a few individual values // related to the previous command. 
long bytesReceived = (long) currentStatistics["BytesReceived"]; long bytesSent = (long) currentStatistics["BytesSent"]; long selectCount = (long) currentStatistics["SelectCount"]; long selectRows = (long) currentStatistics["SelectRows"]; Console.WriteLine("BytesReceived: " + bytesReceived.ToString()); Console.WriteLine("BytesSent: " + bytesSent.ToString()); Console.WriteLine("SelectCount: " + selectCount.ToString()); Console.WriteLine("SelectRows: " + selectRows.ToString()); Console.WriteLine(); Console.WriteLine("Press any key to continue"); Console.ReadLine(); } } private static string GetConnectionString() { // To avoid storing the connection string in your code, // you can retrieve it from a configuration file. return "Data Source=localhost;Integrated Security=SSPI;" + "Initial Catalog=AdventureWorks"; } } } Retrieving All ValuesThe following console application shows how to enable statistics on a connection, retrieve all available statistic values using the enumerator, and write them to the console window. Note The following example uses the sample AdventureWorks database included with SQL Server. The connection string provided in the sample code assumes the database is installed and available on the local computer. Modify the connection string as necessary for your environment. C#using System; using System.Collections; using System.Collections.Generic; using System.Text; using System.Data; using System.Data.SqlClient; namespace CS_Stats_Console_GetAll { class Program { static void Main(string[] args) { string connectionString = GetConnectionString(); using (SqlConnection awConnection = new SqlConnection(connectionString)) { // StatisticsEnabled is False by default. // It must be set to True to start the // statistic collection process. 
awConnection.StatisticsEnabled = true; string productSQL = "SELECT * FROM Production.Product"; SqlDataAdapter productAdapter = new SqlDataAdapter(productSQL, awConnection); DataSet awDataSet = new DataSet(); awConnection.Open(); productAdapter.Fill(awDataSet, "ProductTable"); // Retrieve the current statistics as // a collection of values at this point // and time. IDictionary currentStatistics = awConnection.RetrieveStatistics(); Console.WriteLine("Total Counters: " + currentStatistics.Count.ToString()); Console.WriteLine(); Console.WriteLine("Key Name and Value"); // Note the entries are unsorted. foreach (DictionaryEntry entry in currentStatistics) { Console.WriteLine(entry.Key.ToString() + ": " + entry.Value.ToString()); } Console.WriteLine(); Console.WriteLine("Press any key to continue"); Console.ReadLine(); } } private static string GetConnectionString() { // To avoid storing the connection string in your code, // you can retrieve it from a configuration file. return "Data Source=localhost;Integrated Security=SSPI;" + "Initial Catalog=AdventureWorks"; } } } See alsoSQL Server Express User InstancesMicrosoft SQL Server Express Edition (SQL Server Express) supports the user instance feature, which is only available when using the .NET Framework Data Provider for SQL Server (SqlClient). A user instance is a separate instance of the SQL Server Express Database Engine that is generated by a parent instance. User instances allow users who are not administrators on their local computers to attach and connect to SQL Server Express databases. Each instance runs under the security context of the individual user, on a one-instance-per-user basis. User Instance CapabilitiesUser instances are useful for users who are running Windows under a least-privilege user account (LUA) because each user has SQL Server system administrator (sysadmin) privileges over the instance running on her computer without needing to run as a Windows administrator as well. 
Software executing on a user instance with limited permissions cannot make system-wide changes because the instance of SQL Server Express is running under the non-administrator Windows account of the user, not as a service. Each user instance is isolated from its parent instance and from any other user instances running on the same computer. Databases running on a user instance are opened in single-user mode only, and it is not possible for multiple users to connect to databases running on a user instance. Replication and distributed queries are also disabled for user instances. For more information, see "User Instances" in SQL Server Books Online. Note User instances are not needed for users who are already administrators on their own computers, or for scenarios involving multiple database users.
Enabling User Instances
To generate user instances, a parent instance of SQL Server Express must be running. User instances are enabled by default when SQL Server Express is installed, and they can be explicitly enabled or disabled by a system administrator executing the sp_configure system stored procedure on the parent instance.
-- Enable user instances.
sp_configure 'user instances enabled', '1'
-- Disable user instances.
sp_configure 'user instances enabled', '0'
The network protocol for user instances must be local Named Pipes. A user instance cannot be started on a remote instance of SQL Server, and SQL Server logins are not allowed.
Connecting to a User Instance
The User Instance and AttachDBFilename connection string keywords allow a SqlConnection to connect to a user instance. User instances are also supported by the SqlConnectionStringBuilder UserInstance and AttachDBFilename properties. Note the following about the sample connection string shown below:
Data Source=.\\SQLExpress;Integrated Security=true;
User Instance=true;
AttachDBFilename=|DataDirectory|\InstanceDB.mdf;
Initial Catalog=InstanceDB;
Note You can also use the SqlConnectionStringBuilder UserInstance and AttachDBFilename properties to build a connection string at run time.
Using the |DataDirectory| Substitution String
AttachDbFileName was extended in ADO.NET 2.0 with the introduction of the |DataDirectory| (enclosed in pipe symbols) substitution string. DataDirectory is used in conjunction with AttachDbFileName to indicate a relative path to a data file, allowing developers to create connection strings that are based on a relative path to the data source instead of being required to specify a full path. The physical location that DataDirectory points to depends on the type of application. In this example, the Northwind.mdf file to be attached is located in the application's \app_data folder.
Data Source=.\\SQLExpress;Integrated Security=true;
User Instance=true;
AttachDBFilename=|DataDirectory|\app_data\Northwind.mdf;
Initial Catalog=Northwind;
When DataDirectory is used, the resulting file path cannot be higher in the directory structure than the directory pointed to by the substitution string. For example, if the fully expanded DataDirectory is C:\AppDirectory\app_data, then the sample connection string shown above works because it is below C:\AppDirectory. However, attempting to specify DataDirectory as |DataDirectory|\..\data will result in an error because \data is not a subdirectory of \AppDirectory. If the connection string has an improperly formatted substitution string, an ArgumentException will be thrown. Note System.Data.SqlClient resolves the substitution strings into full paths against the local computer file system. Therefore, remote server, HTTP, and UNC path names are not supported. An exception is thrown when the connection is opened if the server is not located on the local computer.
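The run-time alternative mentioned in the note above can be sketched with SqlConnectionStringBuilder; the file name and initial catalog here are illustrative:

```csharp
using System.Data.SqlClient;

// Build a user instance connection string at run time rather than
// concatenating keyword/value pairs by hand.
SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
builder.DataSource = @".\SQLExpress";
builder.IntegratedSecurity = true;
builder.UserInstance = true;
builder.AttachDBFilename = @"|DataDirectory|\app_data\Northwind.mdf";
builder.InitialCatalog = "Northwind";

string connectionString = builder.ConnectionString;
```

The builder quotes and escapes values as needed, which avoids producing a malformed connection string.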
When the SqlConnection is opened, it is redirected from the default SQL Server Express instance to a run-time initiated instance running under the caller's account. Note It may be necessary to increase the ConnectionTimeout value since user instances may take longer to load than regular instances. The following code fragment opens a new SqlConnection, displays the connection string in the console window, and then closes the connection when exiting the using code block. C#private static void OpenSqlConnection() { // Retrieve the connection string. string connectionString = GetConnectionString(); using (SqlConnection connection = new SqlConnection(connectionString)) { connection.Open(); Console.WriteLine("ConnectionString: {0}", connection.ConnectionString); } } Note User instances are not supported in common language runtime (CLR) code that is running inside of SQL Server. An InvalidOperationException is thrown if Open is called on a SqlConnection that has User Instance=true in the connection string. Lifetime of a User Instance ConnectionUnlike versions of SQL Server that run as a service, SQL Server Express instances do not need to be manually started and stopped. Each time a user logs in and connects to a user instance, the user instance is started if it is not already running. User instance databases have the AutoClose option set so that the database is automatically shut down after a period of inactivity. The sqlservr.exe process that is started is kept running for a limited time-out period after the last connection to the instance is closed, so it does not need to be restarted if another connection is opened before the time-out has expired. The user instance automatically shuts down if no new connection opens before that time-out period has expired. A system administrator on the parent instance can set the duration of the time-out period for a user instance by using sp_configure to change the user instance timeout option. The default is 60 minutes. 
Note If Min Pool Size is used in the connection string with a value greater than zero, the connection pooler will always maintain a few opened connections, and the user instance will not automatically shut down. How User Instances WorkThe first time a user instance is generated for each user, the master and msdb system databases are copied from the Template Data folder to a path under the user's local application data repository directory for exclusive use by the user instance. This path is typically C:\Documents and Settings\<UserName>\Local Settings\Application Data\Microsoft\Microsoft SQL Server Data\SQLEXPRESS. When a user instance starts up, the tempdb, log, and trace files are also written to this directory. A name is generated for the instance, which is guaranteed to be unique for each user. By default all members of the Windows Builtin\Users group are granted permissions to connect on the local instance as well as read and execute permissions on the SQL Server binaries. Once the credentials of the calling user hosting the user instance have been verified, that user becomes the sysadmin on that instance. Only shared memory is enabled for user instances, which means that only operations on the local machine are possible. Users must be granted both read and write permissions on the .mdf and .ldf files specified in the connection string. Note The .mdf and .ldf files represent the database and log files, respectively. These two files are a matched set, so care must be taken during backup and restore operations. The database file contains information about the exact version of the log file, and the database will not open if it is coupled with the wrong log file. To avoid data corruption, a database in the user instance is opened with exclusive access. If two different user instances share the same database on the same computer, the user on the first instance must close the database before it can be opened in a second instance. 
User Instance Scenarios

User instances provide developers of database applications with a SQL Server data store that does not depend on developers having administrative accounts on their development computers. User instances are based on the Access/Jet model, in which the database application simply connects to a file and the user automatically has full permissions on all of the database objects, without needing the intervention of a system administrator to grant permissions. The feature is intended to work in situations where the user is running under a least-privilege user account (LUA) and does not have administrative privileges on the server or local machine, yet needs to create database objects and applications. User instances allow users to create instances at run time that run under the user's own security context, not in the security context of a more privileged system service.

Important: User instances should only be used in scenarios where all the applications using them are fully trusted.

User instance scenarios include:
See also
Database Mirroring in SQL Server

Database mirroring in SQL Server allows you to keep a copy, or mirror, of a SQL Server database on a standby server. Mirroring ensures that two separate copies of the data exist at all times, providing high availability and complete data redundancy. The .NET Data Provider for SQL Server provides implicit support for database mirroring, so the developer does not need to take any action or write any code once mirroring has been configured for a SQL Server database. In addition, the SqlConnection object supports an explicit connection mode that allows supplying the name of a failover partner server in the ConnectionString.

The following simplified sequence of events occurs for a SqlConnection object that targets a database configured for mirroring:
Specifying the Failover Partner in the Connection String

If you supply the name of a failover partner server in the connection string, the client transparently attempts a connection with the failover partner if the principal database is unavailable when the client application first connects.

";Failover Partner=PartnerServerName"

If you omit the name of the failover partner server and the principal database is unavailable when the client application first connects, a SqlException is raised. When a SqlConnection is successfully opened, the failover partner name is returned by the server and supersedes any values supplied in the connection string.

Note: You must explicitly specify the initial catalog or database name in the connection string for database mirroring scenarios. If the client receives failover information on a connection that does not have an explicitly specified initial catalog or database, the failover information is not cached and the application does not attempt to fail over if the principal server fails. If a connection string has a value for the failover partner but no value for the initial catalog or database, an InvalidArgumentException is raised.

Retrieving the Current Server Name

In the event of a failover, you can retrieve the name of the server to which the current connection is actually connected by using the DataSource property of a SqlConnection object. The following code fragment retrieves the name of the active server, assuming that the connection variable references an open SqlConnection. When a failover event occurs and the connection is switched to the mirror server, the DataSource property is updated to reflect the mirror name.

C#

string activeServer = connection.DataSource;

SqlClient Mirroring Behavior

The client always tries to connect to the current principal server. If that attempt fails, it tries the failover partner.
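Putting the pieces together, a mirroring-aware connection string might look like the following sketch. PrincipalServer and MirrorServer are hypothetical server names; note the explicitly specified initial catalog, as required by the note above:

```
Data Source=PrincipalServer;Failover Partner=MirrorServer;Initial Catalog=AdventureWorks;Integrated Security=true;
```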
If the mirror database has already been switched to the principal role on the partner server, the connection succeeds and the new principal-mirror mapping is sent to the client and cached for the lifetime of the calling AppDomain. It is not stored in persistent storage and is not available for subsequent connections in a different AppDomain or process. However, it is available for subsequent connections within the same AppDomain. Note that another AppDomain or process running on the same or a different computer always has its own pool of connections, and those connections are not reset. In that case, if the primary database goes down, each process or AppDomain fails once, and the pool is automatically cleared.

Note: Mirroring support on the server is configured on a per-database basis. If data manipulation operations are executed against databases not included in the principal/mirror set, either by using multipart names or by changing the current database, the changes to those other databases do not propagate in the event of failure. No error is generated when data is modified in a database that is not mirrored. The developer must evaluate the possible impact of such operations.

Database Mirroring Resources

For conceptual documentation and information on configuring, deploying, and administering mirroring, see the following resources in the SQL Server documentation.
See also

SQL Server Common Language Runtime Integration

SQL Server 2005 introduced the integration of the common language runtime (CLR) component of the .NET Framework for Microsoft Windows. This means that you can write stored procedures, triggers, user-defined types, user-defined functions, user-defined aggregates, and streaming table-valued functions using any .NET Framework language, including Microsoft Visual Basic .NET and Microsoft Visual C#. The Microsoft.SqlServer.Server namespace contains a set of application programming interfaces (APIs) so that managed code can interact with the Microsoft SQL Server environment.

This section describes features and behaviors that are specific to SQL Server common language runtime (CLR) integration and to the SQL Server in-process-specific extensions to ADO.NET. It is meant to provide only enough information to get started programming with SQL Server CLR integration, and is not meant to be comprehensive. For more detailed information, see SQL Server Books Online for the version of SQL Server you are using.

In This Section
Introduction to SQL Server CLR Integration
CLR User-Defined Functions
CLR User-Defined Types
CLR Stored Procedures
CLR Triggers
The Context Connection
SQL Server In-Process-Specific Behavior of ADO.NET

See also

Introduction to SQL Server CLR Integration

The common language runtime (CLR) is the heart of the Microsoft .NET Framework and provides the execution environment for all .NET Framework code. Code that runs within the CLR is referred to as managed code. The CLR provides various functions and services required for program execution, including just-in-time (JIT) compilation, allocating and managing memory, enforcing type safety, exception handling, thread management, and security.

With the CLR hosted in Microsoft SQL Server (called CLR integration), you can author stored procedures, triggers, user-defined functions, user-defined types, and user-defined aggregates in managed code. Because managed code compiles to native code prior to execution, you can achieve significant performance increases in some scenarios.

Managed code uses Code Access Security (CAS), code links, and application domains to prevent assemblies from performing certain operations. SQL Server uses CAS to help secure the managed code and prevent compromise of the operating system or database server.

This section is meant to provide only enough information to get started programming with SQL Server CLR integration, and is not meant to be comprehensive. For more detailed information, see SQL Server Books Online for the version of SQL Server you are using.

Enabling CLR Integration

The common language runtime (CLR) integration feature is off by default in Microsoft SQL Server and must be enabled in order to use objects that are implemented using CLR integration. To enable CLR integration using Transact-SQL, use the clr enabled option of the sp_configure stored procedure as shown:

sp_configure 'clr enabled', 1
GO
RECONFIGURE
GO

You can disable CLR integration by setting the clr enabled option to 0. When you disable CLR integration, SQL Server stops executing all CLR routines and unloads all application domains.
For more detailed information, see SQL Server Books Online for the version of SQL Server you are using.

Deploying a CLR Assembly

Once the CLR methods have been tested and verified on the test server, they can be distributed to production servers using a deployment script. The deployment script can be generated manually or by using SQL Server Management Studio. For more detailed information, see SQL Server Books Online for the version of SQL Server you are using.

CLR Integration Security

The security model of the Microsoft SQL Server integration with the Microsoft .NET Framework common language runtime (CLR) manages and secures access between different types of CLR and non-CLR objects running within SQL Server. These objects may be called by a Transact-SQL statement or by another CLR object running in the server. For more detailed information, see SQL Server Books Online for the version of SQL Server you are using.

Debugging a CLR Assembly

Microsoft SQL Server provides support for debugging Transact-SQL and common language runtime (CLR) objects in the database. Debugging works across languages: users can step seamlessly into CLR objects from Transact-SQL, and vice versa. For more detailed information, see SQL Server Books Online for the version of SQL Server you are using.

See also

CLR User-Defined Functions

User-defined functions are routines that can take parameters, perform calculations or other actions, and return a result. You can write user-defined functions in any Microsoft .NET Framework programming language, such as Microsoft Visual Basic .NET or Microsoft Visual C#. For more detailed information, see CLR User-Defined Functions.

See also
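To illustrate what such a deployment script contains, the following hedged Transact-SQL sketch registers an assembly and binds a scalar user-defined function to one of its methods. The assembly name, file path, namespace, and method name are all hypothetical:

```sql
-- Hypothetical names and path; adjust for your environment.
CREATE ASSEMBLY MyClrLibrary
FROM 'C:\Assemblies\MyClrLibrary.dll'
WITH PERMISSION_SET = SAFE;
GO

-- Bind a T-SQL function name to the managed method.
CREATE FUNCTION dbo.AddTwoInts (@a int, @b int)
RETURNS int
AS EXTERNAL NAME MyClrLibrary.[MyNamespace.MathFunctions].AddTwoInts;
GO
```

The SAFE permission set is the most restrictive; EXTERNAL_ACCESS and UNSAFE grant progressively more capability and require additional configuration.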
CLR User-Defined Types

Microsoft SQL Server provides support for user-defined types (UDTs) implemented with the Microsoft .NET Framework common language runtime (CLR). The CLR is integrated into SQL Server, and this mechanism enables you to extend the type system of the database. UDTs provide user extensibility of the SQL Server data type system, as well as the ability to define complex structured types.

UDTs can provide two key benefits from an application architecture perspective:
For more detailed information, see the SQL Server documentation for the version of SQL Server you're using.

See also

CLR Stored Procedures

Stored procedures are routines that cannot be used in scalar expressions. They can return tabular results and messages to the client, invoke data definition language (DDL) and data manipulation language (DML) statements, and return output parameters.

Note: Microsoft Visual Basic does not support output parameters in the same way that Microsoft Visual C# does. You must specify that the parameter is passed by reference and apply the <Out()> attribute to represent an output parameter, as in the following:

VB

Public Shared Sub ExecuteToClient( <Out()> ByRef number As Integer)

For more detailed information, see the SQL Server documentation for the version of SQL Server you're using.

See also
CLR Triggers

A trigger is a special type of stored procedure that automatically runs when a language event executes. Because of the Microsoft SQL Server integration with the .NET Framework common language runtime (CLR), you can use any .NET Framework language to create CLR triggers. For more detailed information, see the SQL Server documentation for the version of SQL Server you're using.

See also

The Context Connection

The problem of internal data access is a fairly common scenario: you want to access the same server on which your common language runtime (CLR) stored procedure or function is executing. One option is to create a connection using SqlConnection, specify a connection string that points to the local server, and open the connection. This requires specifying credentials for logging in. The connection is in a different database session than the stored procedure or function; it may have different SET options, it is in a separate transaction, it does not see your temporary tables, and so on.

If your managed stored procedure or function code is executing in the SQL Server process, it is because someone connected to that server and executed a SQL statement to invoke it. You probably want the stored procedure or function to execute in the context of that connection, along with its transaction, SET options, and so on. This is called the context connection. The context connection lets you execute Transact-SQL statements in the same context in which your code was invoked in the first place. For more detailed information, see SQL Server Books Online for the version of SQL Server you are using.

See also

SQL Server In-Process-Specific Behavior of ADO.NET

There are four main functional extensions to ADO.NET, found in the Microsoft.SqlServer.Server namespace, that are specifically for in-process use: SqlContext, SqlPipe, SqlTriggerContext, and SqlDataRecord.
For more detailed information, see SQL Server Books Online for the version of SQL Server you are using.

See also

Query Notifications in SQL Server

Built upon the Service Broker infrastructure, query notifications allow applications to be notified when data has changed. This feature is particularly useful for applications that provide a cache of information from a database, such as a Web application, and need to be notified when the source data is changed. There are three ways you can implement query notifications using ADO.NET:
Query notifications are used for applications that need to refresh displays or caches in response to changes in underlying data. Microsoft SQL Server allows .NET Framework applications to send a command to SQL Server and request notification if executing the same command would produce result sets different from those initially retrieved. Notifications generated at the server are sent through queues to be processed later.

You can set up notifications for SELECT and EXECUTE statements. When using an EXECUTE statement, SQL Server registers a notification for the command executed rather than for the EXECUTE statement itself. The command must meet the requirements and limitations for a SELECT statement. When a command that registers a notification contains more than one statement, the Database Engine creates a notification for each statement in the batch.

If you are developing an application where you need reliable sub-second notifications when data changes, review the sections Planning an Efficient Query Notifications Strategy and Alternatives to Query Notifications in the Planning for Notifications topic in SQL Server Books Online. For more information about query notifications and SQL Server Service Broker, see the SQL Server documentation.

In This Section
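As a partial reminder of those limitations (a sketch, not the full list): queries eligible for notification must, among other restrictions, reference base tables using two-part names and name their columns explicitly rather than using SELECT *. For example:

```sql
-- Eligible for notification: two-part table name,
-- explicit column list (no SELECT *).
SELECT ShipperID, CompanyName, Phone FROM dbo.Shippers;
```

A query such as SELECT * FROM Shippers would be rejected for notification registration.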
Enabling Query Notifications
SqlDependency in an ASP.NET Application
Detecting Changes with SqlDependency
SqlCommand Execution with a SqlNotificationRequest Reference
SqlNotificationRequest
SqlDependency
SqlCacheDependency

See also

Enabling Query Notifications

Applications that consume query notifications have a common set of requirements. Your data source must be correctly configured to support SQL query notifications, and the user must have the correct client-side and server-side permissions. To use query notifications you must:
Query Notifications Requirements

Query notifications are supported only for SELECT statements that meet a list of specific requirements. The following table provides links to the Service Broker and Query Notifications documentation in SQL Server Books Online.

Enabling Query Notifications to Run Sample Code

To enable Service Broker on the AdventureWorks database by using SQL Server Management Studio, execute the following Transact-SQL statement:

ALTER DATABASE AdventureWorks SET ENABLE_BROKER;

For the query notification samples to run correctly, the following Transact-SQL statements must be executed on the database server:

CREATE QUEUE ContactChangeMessages;

CREATE SERVICE ContactChangeNotifications
  ON QUEUE ContactChangeMessages
([http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification]);

Query Notifications Permissions

Users who execute commands requesting notification must have SUBSCRIBE QUERY NOTIFICATIONS database permission on the server. Client-side code that runs in a partial trust situation requires SqlClientPermission.

The following code creates a SqlClientPermission object, setting the PermissionState to Unrestricted. The Demand forces a SecurityException at run time if any caller higher in the call stack has not been granted the permission.

C#

// Code requires using directives for
// System.Security.Permissions and
// System.Data.SqlClient.
private bool CanRequestNotifications()
{
    SqlClientPermission permission =
        new SqlClientPermission(PermissionState.Unrestricted);
    try
    {
        permission.Demand();
        return true;
    }
    catch (System.Exception)
    {
        return false;
    }
}

Choosing a Notification Object

The query notifications API provides two objects to process notifications: SqlDependency and SqlNotificationRequest. In general, most non-ASP.NET applications should use the SqlDependency object.
ASP.NET applications should use the higher-level SqlCacheDependency, which wraps SqlDependency and provides a framework for administering the notification and cache objects.

Using SqlDependency

To use SqlDependency, Service Broker must be enabled for the SQL Server database being used, and users must have permission to receive notifications. Service Broker objects, such as the notification queue, are predefined. In addition, SqlDependency automatically launches a worker thread to process notifications as they are posted to the queue; it also parses the Service Broker message, exposing the information as event argument data.

SqlDependency must be initialized by calling the Start method to establish a dependency to the database. This is a static method that needs to be called only once during application initialization for each database connection required. The Stop method should be called at application termination for each dependency connection that was made.

Using SqlNotificationRequest

In contrast, SqlNotificationRequest requires you to implement the entire listening infrastructure yourself. In addition, all the supporting Service Broker objects, such as the queue, service, and message types supported by the queue, must be defined. This manual approach is useful if your application requires special notification messages or notification behaviors, or if your application is part of a larger Service Broker application.

See also

SqlDependency in an ASP.NET Application

The example in this section shows how to use SqlDependency indirectly by leveraging the ASP.NET SqlCacheDependency object. The SqlCacheDependency object uses a SqlDependency to listen for notifications and correctly update the cache.

Note: The sample code assumes that you have enabled query notifications by executing the scripts in Enabling Query Notifications.
About the Sample Application

The sample application uses a single ASP.NET Web page to display product information from the AdventureWorks SQL Server database in a GridView control. When the page loads, the code writes the current time to a Label control. It then defines a SqlCacheDependency object and sets properties on the Cache object to store the cache data for up to three minutes. The code then connects to the database and retrieves the data. When the page is loaded and the application is running, ASP.NET retrieves data from the cache, which you can verify by noting that the time on the page does not change. If the data being monitored changes, ASP.NET invalidates the cache and repopulates the GridView control with fresh data, updating the time displayed in the Label control.

Creating the Sample Application

Follow these steps to create and run the sample application:
Testing the Application

The application caches the data displayed on the Web form and refreshes it every three minutes if there is no activity. If a change occurs to the database, the cache is refreshed immediately. Run the application from Visual Studio, which loads the page into the browser. The cache refresh time displayed indicates when the cache was last refreshed. Wait three minutes, and then refresh the page, causing a postback event to occur. Note that the time displayed on the page has changed. If you refresh the page in less than three minutes, the time displayed on the page remains the same.

Now update the data in the database using a Transact-SQL UPDATE command and refresh the page. The time displayed now indicates that the cache was refreshed with the new data from the database. Note that although the cache is updated, the time displayed on the page does not change until a postback event occurs.

See also

Detecting Changes with SqlDependency

A SqlDependency object can be associated with a SqlCommand in order to detect when query results differ from those originally retrieved. You can also assign a delegate to the OnChange event, which fires when the results change for an associated command. You must associate the SqlDependency with the command before you execute the command. The HasChanges property of the SqlDependency can also be used to determine whether the query results have changed since the data was first retrieved.

Security Considerations

The dependency infrastructure relies on a SqlConnection that is opened when Start is called in order to receive notifications that the underlying data has changed for a given command. The ability of a client to initiate the call to SqlDependency.Start is controlled through the use of SqlClientPermission and code access security attributes. For more information, see Enabling Query Notifications and Code Access Security and ADO.NET.
Example

The following steps illustrate how to declare a dependency, execute a command, and receive a notification when the result set changes:
If any user subsequently changes the underlying data, Microsoft SQL Server detects that there is a notification pending for such a change, and posts a notification that is processed and forwarded to the client through the underlying SqlConnection that was created by calling SqlDependency.Start. The client listener receives the invalidation message, locates the associated SqlDependency object, and fires the OnChange event. The following code fragment shows the design pattern you would use to create a sample application.

C#

void Initialization()
{
    // Create a dependency connection.
    SqlDependency.Start(connectionString, queueName);
}

void SomeMethod()
{
    // Assume connection is an open SqlConnection.
    // Create a new SqlCommand object.
    using (SqlCommand command = new SqlCommand(
        "SELECT ShipperID, CompanyName, Phone FROM dbo.Shippers",
        connection))
    {
        // Create a dependency and associate it with the SqlCommand.
        SqlDependency dependency = new SqlDependency(command);
        // Maintain the reference in a class member.

        // Subscribe to the SqlDependency event.
        dependency.OnChange += new OnChangeEventHandler(OnDependencyChange);

        // Execute the command.
        using (SqlDataReader reader = command.ExecuteReader())
        {
            // Process the DataReader.
        }
    }
}

// Handler method
void OnDependencyChange(object sender, SqlNotificationEventArgs e)
{
    // Handle the event (for example, invalidate this cache entry).
}

void Termination()
{
    // Release the dependency.
    SqlDependency.Stop(connectionString, queueName);
}

See also

SqlCommand Execution with a SqlNotificationRequest

A SqlCommand can be configured to generate a notification when data changes after it has been fetched from the server and the result set would be different if the query were executed again. This is useful for scenarios where you want to use custom notification queues on the server or when you do not want to maintain live objects.
Creating the Notification Request

You can use a SqlNotificationRequest object to create the notification request by binding it to a SqlCommand object. Once the request is created, you no longer need the SqlNotificationRequest object. You can query the queue for any notifications and respond appropriately. Notifications can occur even if the application is shut down and subsequently restarted.

When the command with the associated notification is executed, any changes to the original result set trigger sending a message to the SQL Server queue that was configured in the notification request. How you poll the SQL Server queue and interpret the message is specific to your application. The application is responsible for polling the queue and reacting based on the contents of the message.

Note: When using SQL Server notification requests with SqlDependency, create your own queue name instead of using the default service name.

There are no new client-side security elements for SqlNotificationRequest. This is primarily a server feature, and the server defines special privileges that users must have to request a notification.

Example

The following code fragment demonstrates how to create a SqlNotificationRequest and associate it with a SqlCommand.

C#

// Assume connection is an open SqlConnection.
// Create a new SqlCommand object.
SqlCommand command = new SqlCommand(
    "SELECT ShipperID, CompanyName, Phone FROM dbo.Shippers",
    connection);

// Create a SqlNotificationRequest object.
SqlNotificationRequest notificationRequest = new SqlNotificationRequest();
notificationRequest.UserData = "NotificationID";
notificationRequest.Options = "service=mySSBQueue";

// Associate the notification request with the command.
command.Notification = notificationRequest;

// Execute the command.
command.ExecuteReader();
// Process the DataReader.

// You can use Transact-SQL syntax to periodically poll the
// SQL Server queue to see if you have a new message.
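Polling that queue from Transact-SQL might look like the following sketch. The queue name mySSBQueue matches the fragment above, and the five-second timeout is an arbitrary choice:

```sql
-- Wait up to 5 seconds for a notification message on the queue;
-- the message body is the notification payload, expressed as XML.
WAITFOR (
    RECEIVE TOP (1)
        CAST(message_body AS xml) AS NotificationMessage
    FROM mySSBQueue
), TIMEOUT 5000;
```

RECEIVE removes the message from the queue, so the application should parse and act on the payload as part of the same logical operation.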
See also

Snapshot Isolation in SQL Server

Snapshot isolation enhances concurrency for OLTP applications.

Understanding Snapshot Isolation and Row Versioning

Once snapshot isolation is enabled, updated row versions for each transaction are maintained in tempdb. A unique transaction sequence number identifies each transaction, and these unique numbers are recorded for each row version. The transaction works with the most recent row versions having a sequence number before the sequence number of the transaction. Newer row versions created after the transaction has begun are ignored by the transaction.

The term "snapshot" reflects the fact that all queries in the transaction see the same version, or snapshot, of the database, based on the state of the database at the moment the transaction begins. No locks are acquired on the underlying data rows or data pages in a snapshot transaction, which permits other transactions to execute without being blocked by a prior uncompleted transaction. Transactions that modify data do not block transactions that read data, and transactions that read data do not block transactions that write data, as they normally would under the default READ COMMITTED isolation level in SQL Server. This non-blocking behavior also significantly reduces the likelihood of deadlocks for complex transactions.

Snapshot isolation uses an optimistic concurrency model. If a snapshot transaction attempts to commit modifications to data that has changed since the transaction began, the transaction rolls back and an error is raised. You can avoid this by using UPDLOCK hints for SELECT statements that access data to be modified. See "Locking Hints" in SQL Server Books Online for more information.

Snapshot isolation must be enabled by setting the ALLOW_SNAPSHOT_ISOLATION ON database option before it is used in transactions. This activates the mechanism for storing row versions in the temporary database (tempdb).
You must enable snapshot isolation in each database that uses it with the Transact-SQL ALTER DATABASE statement. In this respect, snapshot isolation differs from the traditional isolation levels of READ COMMITTED, REPEATABLE READ, SERIALIZABLE, and READ UNCOMMITTED, which require no configuration. The following statements activate snapshot isolation and replace the default READ COMMITTED behavior with SNAPSHOT:

SQL

ALTER DATABASE MyDatabase
SET ALLOW_SNAPSHOT_ISOLATION ON

ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON

Setting the READ_COMMITTED_SNAPSHOT ON option allows access to versioned rows under the default READ COMMITTED isolation level. If the READ_COMMITTED_SNAPSHOT option is set to OFF, you must explicitly set the Snapshot isolation level for each session in order to access versioned rows.

Managing Concurrency with Isolation Levels

The isolation level under which a Transact-SQL statement executes determines its locking and row versioning behavior. An isolation level has connection-wide scope, and once set for a connection with the SET TRANSACTION ISOLATION LEVEL statement, it remains in effect until the connection is closed or another isolation level is set. When a connection is closed and returned to the pool, the isolation level from the last SET TRANSACTION ISOLATION LEVEL statement is retained. Subsequent connections reusing a pooled connection use the isolation level that was in effect at the time the connection was pooled.

Individual queries issued within a connection can contain lock hints that modify the isolation for a single statement or transaction but do not affect the isolation level of the connection. Isolation levels or lock hints set in stored procedures or functions do not change the isolation level of the connection that calls them and are in effect only for the duration of the stored procedure or function call.

Four isolation levels defined in the SQL-92 standard were supported in early versions of SQL Server:
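When READ_COMMITTED_SNAPSHOT is OFF, each session that wants versioned reads opts in explicitly with SET TRANSACTION ISOLATION LEVEL. The following is a sketch; MyTable and its columns are hypothetical names:

```sql
-- Requires ALLOW_SNAPSHOT_ISOLATION ON for the current database.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    -- Reads see the versioned snapshot as of transaction start,
    -- without blocking on writers.
    SELECT ID, valueCol FROM MyTable;
COMMIT TRANSACTION;
```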
For more information, refer to the Transaction Locking and Row Versioning Guide.

Snapshot Isolation Level Extensions

SQL Server extended the SQL-92 isolation levels with the SNAPSHOT isolation level and an additional implementation of READ COMMITTED. The READ_COMMITTED_SNAPSHOT isolation level can transparently replace READ COMMITTED for all transactions.
How Snapshot Isolation and Row Versioning Work

When the SNAPSHOT isolation level is enabled, each time a row is updated, the SQL Server Database Engine stores a copy of the original row in tempdb and adds a transaction sequence number to the row. The following is the sequence of events that occurs:
The net effect of snapshot isolation is that the transaction sees all of the data as it existed at the start of the transaction, without honoring or placing any locks on the underlying tables. This can result in performance improvements in situations where there is contention.

A snapshot transaction always uses optimistic concurrency control, withholding any locks that would prevent other transactions from updating rows. If a snapshot transaction attempts to commit an update to a row that was changed after the transaction began, the transaction is rolled back and an error is raised.

Working with Snapshot Isolation in ADO.NET

Snapshot isolation is supported in ADO.NET by the SqlTransaction class. If a database has been enabled for snapshot isolation but is not configured for READ_COMMITTED_SNAPSHOT ON, you must initiate a SqlTransaction using the IsolationLevel.Snapshot enumeration value when calling the BeginTransaction method. This code fragment assumes that connection is an open SqlConnection object.

C#

SqlTransaction sqlTran =
    connection.BeginTransaction(IsolationLevel.Snapshot);

Example

The following example demonstrates how the different isolation levels behave by attempting to access locked data; it is not intended to be used in production code.

The code connects to the AdventureWorks sample database in SQL Server, creates a table named TestSnapshot, and inserts one row of data. The code uses the ALTER DATABASE Transact-SQL statement to turn on snapshot isolation for the database, but it does not set the READ_COMMITTED_SNAPSHOT option, leaving the default READ COMMITTED isolation-level behavior in effect. The code then performs the following actions:
Note The following examples use the same connection string with connection pooling turned off. If a connection is pooled, resetting its isolation level does not reset the isolation level at the server. As a result, subsequent connections that use the same pooled inner connection start with their isolation levels set to that of the pooled connection. An alternative to turning off connection pooling is to set the isolation level explicitly for each connection. C#

// Assumes GetConnectionString returns a valid connection string
// where pooling is turned off by setting Pooling=False;.
string connectionString = GetConnectionString();
using (SqlConnection connection1 = new SqlConnection(connectionString))
{
    // Drop the TestSnapshot table if it exists
    connection1.Open();
    SqlCommand command1 = connection1.CreateCommand();
    command1.CommandText = "IF EXISTS " +
        "(SELECT * FROM sys.tables WHERE name=N'TestSnapshot') " +
        "DROP TABLE TestSnapshot";
    try
    {
        command1.ExecuteNonQuery();
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }

    // Enable Snapshot isolation
    command1.CommandText =
        "ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON";
    command1.ExecuteNonQuery();

    // Create a table named TestSnapshot and insert one row of data
    command1.CommandText =
        "CREATE TABLE TestSnapshot (ID int primary key, valueCol int)";
    command1.ExecuteNonQuery();
    command1.CommandText = "INSERT INTO TestSnapshot VALUES (1,1)";
    command1.ExecuteNonQuery();

    // Begin, but do not complete, a transaction to update the data
    // with the Serializable isolation level, which locks the table
    // pending the commit or rollback of the update. The original
    // value in valueCol was 1, the proposed new value is 22.
    SqlTransaction transaction1 =
        connection1.BeginTransaction(IsolationLevel.Serializable);
    command1.Transaction = transaction1;
    command1.CommandText = "UPDATE TestSnapshot SET valueCol=22 WHERE ID=1";
    command1.ExecuteNonQuery();

    // Open a second connection to AdventureWorks
    using (SqlConnection connection2 = new SqlConnection(connectionString))
    {
        connection2.Open();

        // Initiate a second transaction to read from TestSnapshot
        // using Snapshot isolation. This will read the original
        // value of 1 since transaction1 has not yet committed.
        SqlCommand command2 = connection2.CreateCommand();
        SqlTransaction transaction2 =
            connection2.BeginTransaction(IsolationLevel.Snapshot);
        command2.Transaction = transaction2;
        command2.CommandText = "SELECT ID, valueCol FROM TestSnapshot";
        SqlDataReader reader2 = command2.ExecuteReader();
        while (reader2.Read())
        {
            Console.WriteLine("Expected 1,1 Actual " +
                reader2.GetValue(0).ToString() + "," +
                reader2.GetValue(1).ToString());
        }
        transaction2.Commit();
    }

    // Open a third connection to AdventureWorks and
    // initiate a third transaction to read from TestSnapshot
    // using ReadCommitted isolation level. This transaction
    // will not be able to view the data because of
    // the locks placed on the table in transaction1
    // and will time out after 4 seconds.
    // You would see the same behavior with the
    // RepeatableRead or Serializable isolation levels.
    using (SqlConnection connection3 = new SqlConnection(connectionString))
    {
        connection3.Open();
        SqlCommand command3 = connection3.CreateCommand();
        SqlTransaction transaction3 =
            connection3.BeginTransaction(IsolationLevel.ReadCommitted);
        command3.Transaction = transaction3;
        command3.CommandText = "SELECT ID, valueCol FROM TestSnapshot";
        command3.CommandTimeout = 4;
        try
        {
            SqlDataReader sqldatareader3 = command3.ExecuteReader();
            while (sqldatareader3.Read())
            {
                Console.WriteLine("You should never hit this.");
            }
            transaction3.Commit();
        }
        catch (Exception ex)
        {
            Console.WriteLine("Expected timeout expired exception: " +
                ex.Message);
            transaction3.Rollback();
        }
    }

    // Open a fourth connection to AdventureWorks and
    // initiate a fourth transaction to read from TestSnapshot
    // using the ReadUncommitted isolation level. ReadUncommitted
    // will not hit the table lock, and will allow a dirty read
    // of the proposed new value 22 for valueCol. If the first
    // transaction rolls back, this value will never actually have
    // existed in the database.
    using (SqlConnection connection4 = new SqlConnection(connectionString))
    {
        connection4.Open();
        SqlCommand command4 = connection4.CreateCommand();
        SqlTransaction transaction4 =
            connection4.BeginTransaction(IsolationLevel.ReadUncommitted);
        command4.Transaction = transaction4;
        command4.CommandText = "SELECT ID, valueCol FROM TestSnapshot";
        SqlDataReader reader4 = command4.ExecuteReader();
        while (reader4.Read())
        {
            Console.WriteLine("Expected 1,22 Actual " +
                reader4.GetValue(0).ToString() + "," +
                reader4.GetValue(1).ToString());
        }
        transaction4.Commit();
    }

    // Roll back the first transaction
    transaction1.Rollback();
}

// CLEANUP
// Delete the TestSnapshot table and set
// ALLOW_SNAPSHOT_ISOLATION OFF
using (SqlConnection connection5 = new SqlConnection(connectionString))
{
    connection5.Open();
    SqlCommand command5 = connection5.CreateCommand();
    command5.CommandText = "DROP TABLE TestSnapshot";
    SqlCommand command6 = connection5.CreateCommand();
    command6.CommandText =
        "ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION OFF";
    try
    {
        command5.ExecuteNonQuery();
        command6.ExecuteNonQuery();
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
}
Console.WriteLine("Done!");

ExampleThe following example demonstrates the behavior of snapshot isolation when data is being modified. The code performs the following actions:
// Assumes GetConnectionString returns a valid connection string
// where pooling is turned off by setting Pooling=False;.
string connectionString = GetConnectionString();
using (SqlConnection connection1 = new SqlConnection(connectionString))
{
    connection1.Open();
    SqlCommand command1 = connection1.CreateCommand();

    // Enable Snapshot isolation in AdventureWorks
    command1.CommandText =
        "ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON";
    try
    {
        command1.ExecuteNonQuery();
        Console.WriteLine(
            "Snapshot Isolation turned on in AdventureWorks.");
    }
    catch (Exception ex)
    {
        Console.WriteLine("ALLOW_SNAPSHOT_ISOLATION ON failed: {0}",
            ex.Message);
    }

    // Create a table
    command1.CommandText = "IF EXISTS " +
        "(SELECT * FROM sys.tables " +
        "WHERE name=N'TestSnapshotUpdate')" +
        " DROP TABLE TestSnapshotUpdate";
    command1.ExecuteNonQuery();
    command1.CommandText = "CREATE TABLE TestSnapshotUpdate " +
        "(ID int primary key, CharCol nvarchar(100));";
    try
    {
        command1.ExecuteNonQuery();
        Console.WriteLine("TestSnapshotUpdate table created.");
    }
    catch (Exception ex)
    {
        Console.WriteLine("CREATE TABLE failed: {0}", ex.Message);
    }

    // Insert some data
    command1.CommandText =
        "INSERT INTO TestSnapshotUpdate VALUES (1,N'abcdefg');" +
        "INSERT INTO TestSnapshotUpdate VALUES (2,N'hijklmn');" +
        "INSERT INTO TestSnapshotUpdate VALUES (3,N'opqrstuv');";
    try
    {
        command1.ExecuteNonQuery();
        Console.WriteLine("Data inserted TestSnapshotUpdate table.");
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }

    // Begin, but do not complete, a transaction
    // using the Snapshot isolation level.
    SqlTransaction transaction1 = null;
    try
    {
        transaction1 = connection1.BeginTransaction(IsolationLevel.Snapshot);
        command1.CommandText =
            "SELECT * FROM TestSnapshotUpdate WHERE ID BETWEEN 1 AND 3";
        command1.Transaction = transaction1;
        command1.ExecuteNonQuery();
        Console.WriteLine("Snapshot transaction1 started.");

        // Open a second Connection/Transaction to update data
        // using ReadCommitted. This transaction should succeed.
        using (SqlConnection connection2 = new SqlConnection(connectionString))
        {
            connection2.Open();
            SqlCommand command2 = connection2.CreateCommand();
            command2.CommandText =
                "UPDATE TestSnapshotUpdate SET CharCol=" +
                "N'New value from Connection2' WHERE ID=1";
            SqlTransaction transaction2 =
                connection2.BeginTransaction(IsolationLevel.ReadCommitted);
            command2.Transaction = transaction2;
            try
            {
                command2.ExecuteNonQuery();
                transaction2.Commit();
                Console.WriteLine(
                    "transaction2 has modified data and committed.");
            }
            catch (SqlException ex)
            {
                Console.WriteLine(ex.Message);
                transaction2.Rollback();
            }
            finally
            {
                transaction2.Dispose();
            }
        }

        // Now try to update a row in Connection1/Transaction1.
        // This transaction should fail because Transaction2
        // succeeded in modifying the data.
        command1.CommandText =
            "UPDATE TestSnapshotUpdate SET CharCol=" +
            "N'New value from Connection1' WHERE ID=1";
        command1.Transaction = transaction1;
        command1.ExecuteNonQuery();
        transaction1.Commit();
        Console.WriteLine("You should never see this.");
    }
    catch (SqlException ex)
    {
        Console.WriteLine("Expected failure for transaction1:");
        Console.WriteLine("  {0}: {1}", ex.Number, ex.Message);
    }
    finally
    {
        transaction1.Dispose();
    }
}

// CLEANUP:
// Turn off Snapshot isolation and delete the table
using (SqlConnection connection3 = new SqlConnection(connectionString))
{
    connection3.Open();
    SqlCommand command3 = connection3.CreateCommand();
    command3.CommandText =
        "ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION OFF";
    try
    {
        command3.ExecuteNonQuery();
        Console.WriteLine(
            "CLEANUP: Snapshot isolation turned off in AdventureWorks.");
    }
    catch (Exception ex)
    {
        Console.WriteLine("CLEANUP FAILED: {0}", ex.Message);
    }
    command3.CommandText = "DROP TABLE TestSnapshotUpdate";
    try
    {
        command3.ExecuteNonQuery();
        Console.WriteLine("CLEANUP: TestSnapshotUpdate table deleted.");
    }
    catch (Exception ex)
    {
        Console.WriteLine("CLEANUP FAILED: {0}", ex.Message);
    }
}

Using Lock Hints with Snapshot IsolationIn the previous example, the first transaction selects data, and a second transaction updates the data before the first transaction is able to complete, causing an update conflict when the first transaction tries to update the same row. You can reduce the chance of update conflicts in long-running snapshot transactions by supplying lock hints at the beginning of the transaction. The following SELECT statement uses the UPDLOCK hint to lock the selected rows: SQL

SELECT * FROM TestSnapshotUpdate WITH (UPDLOCK)
WHERE ID BETWEEN 1 AND 3

Using the UPDLOCK lock hint blocks any transactions attempting to update the rows before the first transaction completes. This guarantees that the selected rows have no conflicts when they are updated later in the transaction. See "Locking Hints" in SQL Server Books Online. If your application has many conflicts, snapshot isolation may not be the best choice. Hints should only be used when really needed. Your application should not be designed so that it constantly relies on lock hints for its operation. See also
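The hint can be combined with a snapshot transaction in ADO.NET. The following fragment is a sketch only — it assumes the TestSnapshotUpdate table (with its ID and CharCol columns), an appropriate connectionString, and the usual System.Data and System.Data.SqlClient usings:

```csharp
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    SqlTransaction transaction =
        connection.BeginTransaction(IsolationLevel.Snapshot);
    SqlCommand command = connection.CreateCommand();
    command.Transaction = transaction;

    // UPDLOCK takes update locks on the selected rows, blocking other
    // writers until this transaction commits or rolls back.
    command.CommandText =
        "SELECT * FROM TestSnapshotUpdate WITH (UPDLOCK) " +
        "WHERE ID BETWEEN 1 AND 3";
    command.ExecuteNonQuery();

    // Because the rows are locked, this update cannot hit an
    // update conflict from a competing transaction.
    command.CommandText =
        "UPDATE TestSnapshotUpdate SET CharCol=N'Updated' WHERE ID=1";
    command.ExecuteNonQuery();
    transaction.Commit();
}
```

The trade-off is that the locks are held for the life of the transaction, which reduces the concurrency benefit snapshot isolation normally provides.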
SqlClient Support for High Availability, Disaster RecoveryThis topic discusses SqlClient support (added in .NET Framework 4.5) for high-availability, disaster recovery -- AlwaysOn Availability Groups. The AlwaysOn Availability Groups feature was added to SQL Server 2012. For more information about AlwaysOn Availability Groups, see SQL Server Books Online. You can now specify the availability group listener of a (high-availability, disaster-recovery) availability group (AG) or SQL Server 2012 Failover Cluster Instance in the connection property. If a SqlClient application is connected to an AlwaysOn database that fails over, the original connection is broken and the application must open a new connection to continue work after the failover. If you are not connecting to an availability group listener or SQL Server 2012 Failover Cluster Instance, and if multiple IP addresses are associated with a hostname, SqlClient will iterate sequentially through all IP addresses associated with the DNS entry. This can be time consuming if the first IP address returned by the DNS server is not bound to any network interface card (NIC). When connecting to an availability group listener or SQL Server 2012 Failover Cluster Instance, SqlClient attempts to establish connections to all IP addresses in parallel, and if a connection attempt succeeds, the driver discards any pending connection attempts. Note Increasing connection timeout and implementing connection retry logic will increase the probability that an application will connect to an availability group. Also, because a connection can fail because of a failover, you should implement connection retry logic, retrying a failed connection until it reconnects. The following connection properties were added to SqlClient in .NET Framework 4.5:
You can programmatically modify these connection string keywords with: Note Setting MultiSubnetFailover to true isn't required with .NET Framework 4.6.1 or later versions. Connecting With MultiSubnetFailoverAlways specify MultiSubnetFailover=True when connecting to a SQL Server 2012 availability group listener or SQL Server 2012 Failover Cluster Instance. MultiSubnetFailover enables faster failover for all availability groups and Failover Cluster Instances in SQL Server 2012 and will significantly reduce failover time for single and multi-subnet AlwaysOn topologies. During a multi-subnet failover, the client will attempt connections in parallel. During a subnet failover, the client will aggressively retry the TCP connection. The MultiSubnetFailover connection property indicates that the application is being deployed in an availability group or SQL Server 2012 Failover Cluster Instance and that SqlClient will try to connect to the database on the primary SQL Server instance by trying to connect to all the IP addresses. When MultiSubnetFailover=True is specified for a connection, the client retries TCP connection attempts faster than the operating system's default TCP retransmit intervals. This enables faster reconnection after failover of either an AlwaysOn Availability Group or an AlwaysOn Failover Cluster Instance, and is applicable to both single- and multi-subnet Availability Groups and Failover Cluster Instances. For more information about connection string keywords in SqlClient, see ConnectionString. Specifying MultiSubnetFailover=True when connecting to something other than an availability group listener or SQL Server 2012 Failover Cluster Instance may result in a negative performance impact, and is not supported. Use the following guidelines to connect to a server in an availability group or SQL Server 2012 Failover Cluster Instance:
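A minimal sketch of building such a connection string with SqlConnectionStringBuilder; the listener name AG-Listener and the database name are placeholders:

```csharp
using System;
using System.Data.SqlClient;

class ListenerConnectionString
{
    static void Main()
    {
        SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
        builder.DataSource = "AG-Listener";      // availability group listener
        builder.InitialCatalog = "AdventureWorks";
        builder.IntegratedSecurity = true;
        builder.MultiSubnetFailover = true;      // parallel connection attempts
        builder.ConnectTimeout = 30;             // allow time for failover

        Console.WriteLine(builder.ConnectionString);
    }
}
```

ApplicationIntent can be set the same way (builder.ApplicationIntent = ApplicationIntent.ReadOnly) when the connection should be routed to a readable secondary.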
If read-only routing is not in effect, connecting to a secondary replica location will fail in the following situations:
SqlDependency is not supported on read-only secondary replicas. A connection will fail if a primary replica is configured to reject read-only workloads and the connection string contains ApplicationIntent=ReadOnly. Upgrading to Use Multi-Subnet Clusters from Database MirroringA connection error (ArgumentException) will occur if the MultiSubnetFailover and Failover Partner connection keywords are present in the connection string, or if MultiSubnetFailover=True and a protocol other than TCP is used. An error (SqlException) will also occur if MultiSubnetFailover is used and the SQL Server returns a failover partner response indicating it is part of a database mirroring pair. If you upgrade a SqlClient application that currently uses database mirroring to a multi-subnet scenario, you should remove the Failover Partner connection property and replace it with MultiSubnetFailover set to True and replace the server name in the connection string with an availability group listener. If a connection string uses Failover Partner and MultiSubnetFailover=True, the driver will generate an error. However, if a connection string uses Failover Partner and MultiSubnetFailover=False (or ApplicationIntent=ReadWrite), the application will use database mirroring. The driver will return an error if database mirroring is used on the primary database in the AG, and if MultiSubnetFailover=True is used in the connection string that connects to a primary database instead of to an availability group listener. Specifying Application IntentWhen ApplicationIntent=ReadOnly, the client requests a read workload when connecting to an AlwaysOn enabled database. The server will enforce the intent at connection time and during a USE database statement but only to an Always On enabled database. The ApplicationIntent keyword does not work with legacy, read-only databases. A database can allow or disallow read workloads on the targeted AlwaysOn database. 
(This is done with the ALLOW_CONNECTIONS clause of the PRIMARY_ROLE and SECONDARY_ROLE Transact-SQL statements.) The ApplicationIntent keyword is used to enable read-only routing. Read-Only RoutingRead-only routing is a feature that can ensure the availability of a read-only replica of a database. To enable read-only routing:
It is possible that multiple connections using read-only routing will not all connect to the same read-only replica. Changes in database synchronization or changes in the server's routing configuration can result in client connections to different read-only replicas. To ensure that all read-only requests connect to the same read-only replica, do not pass an availability group listener to the Data Source connection string keyword. Instead, specify the name of the read-only instance. Read-only routing may take longer than connecting to the primary because read-only routing first connects to the primary and then looks for the best available readable secondary. Because of this, you should increase your login timeout. See alsoSqlClient Support for LocalDBBeginning in SQL Server 2012 (code-named Denali), a lightweight version of SQL Server, called LocalDB, is available. This topic discusses how to connect to a LocalDB database. RemarksFor more information about LocalDB, including how to install LocalDB and configure your LocalDB instance, see SQL Server Books Online. To summarize what you can do with LocalDB:
User Instance=True is not allowed when connecting to a LocalDB database. You can download LocalDB from Microsoft SQL Server 2012 Feature Pack. If you will use sqlcmd.exe to modify data in your LocalDB instance, you will need sqlcmd from SQL Server 2012, which you can also get from the SQL Server 2012 Feature Pack. Programmatically Create a Named InstanceAn application can create a named instance and specify a database as follows:
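For illustration, here is a hedged sketch of connecting to a named LocalDB instance. The instance name MyInstance and the database file path are hypothetical; such an instance could be created beforehand with the sqllocaldb.exe utility (sqllocaldb create MyInstance):

```csharp
using System;
using System.Data.SqlClient;

class LocalDBExample
{
    static void Main()
    {
        // Hypothetical instance and file names, for illustration only.
        // The (localdb)\ prefix tells SqlClient to use LocalDB.
        string connectionString =
            @"Server=(localdb)\MyInstance;Integrated Security=true;" +
            @"AttachDbFileName=C:\Data\MyDB.mdf";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            Console.WriteLine("Connected to: " + connection.DataSource);
        }
    }
}
```

AttachDbFileName attaches the specified database file to the instance on first use, which suits the file-based deployment model LocalDB is designed for.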
See alsoLINQ to SQLLINQ to SQL is a component of .NET Framework version 3.5 that provides a run-time infrastructure for managing relational data as objects. Note Relational data appears as a collection of two-dimensional tables (relations or flat files), where common columns relate tables to each other. To use LINQ to SQL effectively, you must have some familiarity with the underlying principles of relational databases. In LINQ to SQL, the data model of a relational database is mapped to an object model expressed in the programming language of the developer. When the application runs, LINQ to SQL translates into SQL the language-integrated queries in the object model and sends them to the database for execution. When the database returns the results, LINQ to SQL translates them back to objects that you can work with in your own programming language. Developers using Visual Studio typically use the Object Relational Designer, which provides a user interface for implementing many of the features of LINQ to SQL. The documentation that is included with this release of LINQ to SQL describes the basic building blocks, processes, and techniques you need for building LINQ to SQL applications. You can also search Microsoft Docs for specific issues, and you can participate in the LINQ Forum, where you can discuss more complex topics in detail with experts. Finally, the LINQ to SQL: .NET Language-Integrated Query for Relational Data white paper details LINQ to SQL technology, complete with Visual Basic and C# code examples. In This Section
Getting Started
Programming Guide
Reference
Samples Related Sections
Language-Integrated Query (LINQ) - C#
Language-Integrated Query (LINQ) - Visual Basic
LINQ
LINQ and ADO.NET
LINQ to SQL Walkthroughs
Downloading Sample Databases
LinqDataSource Web Server Control Overview Getting StartedBy using LINQ to SQL, you can use the LINQ technology to access SQL databases just as you would access an in-memory collection. For example, the nw object in the following code is created to represent the Northwind database, the Customers table is targeted, the rows are filtered for Customers from London, and a string for CompanyName is selected for retrieval. When the loop is executed, the collection of CompanyName values is retrieved. C#// Northwnd inherits from System.Data.Linq.DataContext. Northwnd nw = new Northwnd(@"northwnd.mdf"); // or, if you are not using SQL Server Express // Northwnd nw = new Northwnd("Database=Northwind;Server=server_name;Integrated Security=SSPI"); var companyNameQuery = from cust in nw.Customers where cust.City == "London" select cust.CompanyName; foreach (var customer in companyNameQuery) { Console.WriteLine(customer); } Next StepsFor some additional examples, including inserting and updating, see What You Can Do With LINQ to SQL. Next, try some walkthroughs and tutorials to have a hands-on experience of using LINQ to SQL. See Learning by Walkthroughs. Finally, learn how to get started on your own LINQ to SQL project by reading Typical Steps for Using LINQ to SQL. See also
What You Can Do With LINQ to SQLLINQ to SQL supports all the key capabilities you would expect as a SQL developer. You can query for information, and insert, update, and delete information from tables. SelectingSelecting (projection) is achieved by just writing a LINQ query in your own programming language, and then executing that query to retrieve the results. LINQ to SQL itself translates all the necessary operations into the necessary SQL operations that you are familiar with. For more information, see LINQ to SQL. In the following example, the company names of customers from London are retrieved and displayed in the console window. C#// Northwnd inherits from System.Data.Linq.DataContext. Northwnd nw = new Northwnd(@"northwnd.mdf"); // or, if you are not using SQL Server Express // Northwnd nw = new Northwnd("Database=Northwind;Server=server_name;Integrated Security=SSPI"); var companyNameQuery = from cust in nw.Customers where cust.City == "London" select cust.CompanyName; foreach (var customer in companyNameQuery) { Console.WriteLine(customer); } InsertingTo execute a SQL Insert, just add objects to the object model you have created, and call SubmitChanges on the DataContext. In the following example, a new customer and information about the customer is added to the Customers table by using InsertOnSubmit. C#// Northwnd inherits from System.Data.Linq.DataContext. Northwnd nw = new Northwnd(@"northwnd.mdf"); Customer cust = new Customer(); cust.CompanyName = "SomeCompany"; cust.City = "London"; cust.CustomerID = "98128"; cust.PostalCode = "55555"; cust.Phone = "555-555-5555"; nw.Customers.InsertOnSubmit(cust); // At this point, the new Customer object is added in the object model. // In LINQ to SQL, the change is not sent to the database until // SubmitChanges is called. nw.SubmitChanges(); UpdatingTo Update a database entry, first retrieve the item and edit it directly in the object model. 
After you have modified the object, call SubmitChanges on the DataContext to update the database. In the following example, all customers who are from London are retrieved. Then the name of the city is changed from "London" to "London - Metro". Finally, SubmitChanges is called to send the changes to the database. C#Northwnd nw = new Northwnd(@"northwnd.mdf"); var cityNameQuery = from cust in nw.Customers where cust.City.Contains("London") select cust; foreach (var customer in cityNameQuery) { if (customer.City == "London") { customer.City = "London - Metro"; } } nw.SubmitChanges(); DeletingTo Delete an item, remove the item from the collection to which it belongs, and then call SubmitChanges on the DataContext to commit the change. Note LINQ to SQL does not recognize cascade-delete operations. If you want to delete a row in a table that has constraints against it, see How to: Delete Rows From the Database. In the following example, the customer who has CustomerID of 98128 is retrieved from the database. Then, after confirming that the customer row was retrieved, DeleteOnSubmit is called to remove that object from the collection. Finally, SubmitChanges is called to forward the deletion to the database. C#Northwnd nw = new Northwnd(@"northwnd.mdf"); var deleteIndivCust = from cust in nw.Customers where cust.CustomerID == "98128" select cust; if (deleteIndivCust.Count() > 0) { nw.Customers.DeleteOnSubmit(deleteIndivCust.First()); nw.SubmitChanges(); } See alsoTypical Steps for Using LINQ to SQLTo implement a LINQ to SQL application, you follow the steps described later in this topic. Note that many steps are optional. It is very possible that you can use your object model in its default state. For a really fast start, use the Object Relational Designer to create your object model and start coding your queries. Creating the Object ModelThe first step is to create an object model from the metadata of an existing relational database. 
The object model represents the database according to the programming language of the developer. For more information, see The LINQ to SQL Object Model. 1. Select a tool to create the model.Three tools are available for creating the model.
2. Select the kind of code you want to generate.
3. Refine the code file to reflect the needs of your application.For this purpose, you can use either the O/R Designer or the code editor. Using the Object ModelThe following illustration shows the relationship between the developer and the data in a two-tier scenario. For other scenarios, see N-Tier and Remote Applications with LINQ to SQL.
Now that you have the object model, you describe information requests and manipulate data within that model. You think in terms of the objects and properties in your object model and not in terms of the rows and columns of the database. You do not deal directly with the database. When you instruct LINQ to SQL to either execute a query that you have described or call SubmitChanges() on data that you have manipulated, LINQ to SQL communicates with the database in the language of the database. The following represents typical steps for using the object model that you have created. 1. Create queries to retrieve information from the database.For more information, see Query Concepts and Query Examples. 2. Override default behaviors for Insert, Update, and Delete.This step is optional. For more information, see Customizing Insert, Update, and Delete Operations. 3. Set appropriate options to detect and report concurrency conflicts.You can leave your model with its default values for handling concurrency conflicts, or you can change it to suit your purposes. For more information, see How to: Specify Which Members are Tested for Concurrency Conflicts and How to: Specify When Concurrency Exceptions are Thrown. 4. Establish an inheritance hierarchy.This step is optional. For more information, see Inheritance Support. 5. Provide an appropriate user interface.This step is optional, and depends on how your application will be used. 6. Debug and test your application.For more information, see Debugging Support. See alsoGet the sample databases for ADO.NET code samplesA number of examples and walkthroughs in the LINQ to SQL documentation use sample SQL Server databases and SQL Server Express. You can download these products free of charge from Microsoft. 
Get the Northwind sample database for SQL ServerDownload the script instnwnd.sql from the following GitHub repository to create and load the Northwind sample database for SQL Server: Northwind and pubs sample databases for Microsoft SQL Server Before you can use the Northwind database, you have to run the downloaded instnwnd.sql script file to recreate the database on an instance of SQL Server by using SQL Server Management Studio or a similar tool. Follow the instructions in the Readme file in the repository. Tip If you're looking for the Northwind database for Microsoft Access, see Install the Northwind sample database for Microsoft Access. Get the Northwind sample database for Microsoft AccessThe Northwind sample database for Microsoft Access is not available on the Microsoft Download Center. To install Northwind directly from within Access, do the following things:
Get the AdventureWorks sample database for SQL ServerDownload the AdventureWorks sample database for SQL Server from the following GitHub repository: AdventureWorks sample databases After you download one of the database backup (*.bak) files, restore the backup to an instance of SQL Server by using SQL Server Management Studio (SSMS). See Get SQL Server Management Studio. Get SQL Server Management StudioIf you want to view or modify a database that you've downloaded, you can use SQL Server Management Studio (SSMS). Download SSMS from the following page: Download SQL Server Management Studio (SSMS) You can also view and manage databases in the Visual Studio integrated development environment (IDE). In Visual Studio, connect to the database from SQL Server Object Explorer, or create a Data Connection to the database in Server Explorer. Open these explorer panes from the View menu. Get SQL Server ExpressSQL Server Express is a free, entry-level edition of SQL Server that you can redistribute with applications. Download SQL Server Express from the following page: If you're using Visual Studio, SQL Server Express LocalDB is included in the free Community edition of Visual Studio, as well as the Professional and higher editions. See alsoLearning by WalkthroughsThe LINQ to SQL documentation provides several walkthroughs. This topic addresses some general walkthrough issues (including troubleshooting), and provides links to several entry-level walkthroughs for learning about LINQ to SQL. Note The walkthroughs in this Getting Started section expose you to the basic code that supports LINQ to SQL technology. In actual practice, you will typically use the Object Relational Designer and Windows Forms projects to implement your LINQ to SQL applications. The O/R Designer documentation provides examples and walkthroughs for this purpose. Getting Started WalkthroughsSeveral walkthroughs are available in this section. 
These walkthroughs are based on the sample Northwind database, and present LINQ to SQL features at a gentle pace with minimal complexities. A typical progression to follow would be as follows:
GeneralThe following information pertains to these walkthroughs in general:
TroubleshootingRun-time errors can occur because you do not have sufficient permissions to access the databases used in these walkthroughs. See the following steps to help resolve the most common of these issues. Log-On IssuesYour application might be trying to access the database by way of a database logon it does not accept. To verify or change the database log on
ProtocolsAt times, protocols might not be set correctly for your application to access the database. For example, the Named Pipes protocol, which is required for walkthroughs in LINQ to SQL, is not enabled by default. To enable the Named Pipes protocol
Stopping and Restarting the ServiceYou must stop and restart services before your changes can take effect. To stop and restart the service
See alsoWalkthrough: Simple Object Model and Query (Visual Basic)This walkthrough provides a fundamental end-to-end LINQ to SQL scenario with minimal complexities. You will create an entity class that models the Customers table in the sample Northwind database. You will then create a simple query to list customers who are located in London. This walkthrough is code-oriented by design to help show LINQ to SQL concepts. Normally, you would use the Object Relational Designer to create your object model. Note Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE. This walkthrough was written by using Visual Basic Development Settings. Prerequisites
Overview

This walkthrough consists of six main tasks:
Creating a LINQ to SQL Solution

In this first task, you create a Visual Studio solution that contains the necessary references to build and run a LINQ to SQL project.

To create a LINQ to SQL solution
Adding LINQ References and Directives

This walkthrough uses assemblies that might not be installed by default in your project. If System.Data.Linq is not listed as a reference in your project (click Show All Files in Solution Explorer and expand the References node), add it, as explained in the following steps.

To add System.Data.Linq
Mapping a Class to a Database Table

In this step, you create a class and map it to a database table. Such a class is termed an entity class. Note that the mapping is accomplished by just adding the TableAttribute attribute. The Name property specifies the name of the table in the database.

To create an entity class and map it to a database table
Designating Properties on the Class to Represent Database Columns

In this step, you accomplish several tasks.
To represent characteristics of two database columns
Specifying the Connection to the Northwind Database

In this step, you use a DataContext object to establish a connection between your code-based data structures and the database itself. The DataContext is the main channel through which you retrieve objects from the database and submit changes. You also declare a Table(Of Customer) to act as the logical, typed table for your queries against the Customers table in the database. You will create and execute these queries in later steps.

To specify the database connection
Creating a Simple Query

In this step, you create a query to find which customers in the database Customers table are located in London. The query code in this step just describes the query. It does not execute it. This approach is known as deferred execution. For more information, see Introduction to LINQ Queries (C#). You will also produce a log output to show the SQL commands that LINQ to SQL generates. This logging feature (which uses Log) is helpful in debugging, and in determining that the commands being sent to the database accurately represent your query.

To create a simple query
Executing the Query

In this step, you actually execute the query. The query expressions you created in the previous steps are not evaluated until the results are needed. When you begin the For Each iteration, a SQL command is executed against the database and objects are materialized.

To execute the query
Next Steps

The Walkthrough: Querying Across Relationships (Visual Basic) topic continues where this walkthrough ends. The Querying Across Relationships walkthrough demonstrates how LINQ to SQL can query across tables, similar to joins in a relational database. If you want to do the Querying Across Relationships walkthrough, make sure to save the solution for the walkthrough you have just completed, which is a prerequisite.

See also

Walkthrough: Querying Across Relationships (Visual Basic)

This walkthrough demonstrates the use of LINQ to SQL associations to represent foreign-key relationships in the database.

Note Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE.

This walkthrough was written by using Visual Basic Development Settings.

Prerequisites

You must have completed Walkthrough: Simple Object Model and Query (Visual Basic). This walkthrough builds on that one, including the presence of the northwnd.mdf file in c:\linqtest.

Overview

This walkthrough consists of three main tasks:
Mapping Relationships across Tables

After the Customer class definition, create the Order entity class definition that includes the following code, which indicates that Orders.Customer relates as a foreign key to Customers.CustomerID.

To add the Order entity class
Annotating the Customer Class

In this step, you annotate the Customer class to indicate its relationship to the Order class. (This addition is not strictly necessary, because defining the relationship in either direction is sufficient to create the link. But adding this annotation does enable you to easily navigate objects in either direction.)

To annotate the Customer class
Creating and Running a Query across the Customer-Order Relationship

You can now access Order objects directly from the Customer objects, or in the opposite order. You do not need an explicit join between customers and orders.

To access Order objects by using Customer objects
Creating a Strongly Typed View of Your Database

It is much easier to start with a strongly typed view of your database. By strongly typing the DataContext object, you do not need calls to GetTable. You can use strongly typed tables in all your queries when you use the strongly typed DataContext object. In the following steps, you will create Customers as a strongly typed table that maps to the Customers table in the database.

To strongly type the DataContext object
Next Steps

The next walkthrough (Walkthrough: Manipulating Data (Visual Basic)) demonstrates how to manipulate data. That walkthrough does not require that you save the two walkthroughs in this series that you have already completed.

See also

Walkthrough: Manipulating Data (Visual Basic)

This walkthrough provides a fundamental end-to-end LINQ to SQL scenario for adding, modifying, and deleting data in a database. You will use a copy of the sample Northwind database to add a customer, change the name of a customer, and delete an order.

Note Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE.

This walkthrough was written by using Visual Basic Development Settings.

Prerequisites

This walkthrough requires the following:
Overview

This walkthrough consists of six main tasks:
Creating a LINQ to SQL Solution

In this first task, you create a Visual Studio solution that contains the necessary references to build and run a LINQ to SQL project.

To create a LINQ to SQL solution
Adding LINQ References and Directives

This walkthrough uses assemblies that might not be installed by default in your project. If System.Data.Linq is not listed as a reference in your project (click Show All Files in Solution Explorer and expand the References node), add it, as explained in the following steps.

To add System.Data.Linq
Adding the Northwind Code File to the Project

These steps assume that you have used the SQLMetal tool to generate a code file from the Northwind sample database. For more information, see the Prerequisites section earlier in this walkthrough.

To add the northwind code file to the project
Setting Up the Database Connection

First, test your connection to the database. Note especially that the name of the database, Northwnd, has no i character. If you generate errors in the next steps, review the northwind.vb file to determine how the Northwind partial class is spelled.

To set up and test the database connection
Creating a New Entity

Creating a new entity is straightforward. You can create objects (such as Customer) by using the New keyword. In this and the following sections, you are making changes only to the local cache. No changes are sent to the database until you call SubmitChanges toward the end of this walkthrough.

To add a new Customer entity object
Updating an Entity

In the following steps, you will retrieve a Customer object and modify one of its properties.

To change the name of a Customer
Deleting an Entity

Using the same customer object, you can delete the first order. The following code demonstrates how to sever relationships between rows, and how to delete a row from the database.

To delete a row
Submitting Changes to the Database

The final step required for creating, updating, and deleting objects is to actually submit the changes to the database. Without this step, your changes are only local and will not appear in query results.

To submit changes to the database
Note After you have added the new customer by submitting the changes, you cannot execute this solution again as is, because the same customer cannot be added twice. To execute the solution again, change the value of the customer ID to be added.

See also

Walkthrough: Using Only Stored Procedures (Visual Basic)

This walkthrough provides a basic end-to-end LINQ to SQL scenario for accessing data by using stored procedures only. This approach is often used by database administrators to limit how the datastore is accessed.

Note You can also use stored procedures in LINQ to SQL applications to override default behavior, especially for Create, Update, and Delete processes. For more information, see Customizing Insert, Update, and Delete Operations.

For purposes of this walkthrough, you will use two methods that have been mapped to stored procedures in the Northwind sample database: CustOrdersDetail and CustOrderHist. The mapping occurs when you run the SqlMetal command-line tool to generate a Visual Basic file. For more information, see the Prerequisites section later in this walkthrough.

This walkthrough does not rely on the Object Relational Designer. Developers using Visual Studio can also use the O/R Designer to implement stored procedure functionality. See LINQ to SQL Tools in Visual Studio.

Note Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE.

This walkthrough was written by using Visual Basic Development Settings.

Prerequisites

This walkthrough requires the following:
Overview

This walkthrough consists of six main tasks:
Creating a LINQ to SQL Solution

In this first task, you create a Visual Studio solution that contains the necessary references to build and run a LINQ to SQL project.

To create a LINQ to SQL solution
Adding the LINQ to SQL Assembly Reference

The LINQ to SQL assembly is not included in the standard Windows Forms Application template. You will have to add the assembly yourself, as explained in the following steps:

To add System.Data.Linq.dll
Adding the Northwind Code File to the Project

This step assumes that you have used the SqlMetal tool to generate a code file from the Northwind sample database. For more information, see the Prerequisites section earlier in this walkthrough.

To add the northwind code file to the project
Creating a Database Connection

In this step, you define the connection to the Northwind sample database. This walkthrough uses "c:\linqtest3\northwnd.mdf" as the path.

To create the database connection
Setting up the User Interface

In this task, you create an interface so that users can execute stored procedures to access data in the database. In the application that you are developing with this walkthrough, users can access data in the database only by using the stored procedures embedded in the application.

To set up the user interface
To handle button clicks
Testing the Application

Now it is time to test your application. Note that your contact with the datastore is limited to whatever actions the two stored procedures can take. Those actions are to return the products included for any orderID you enter, or to return a history of products ordered for any CustomerID you enter.

To test the application
Next Steps

You can enhance this project by making some changes. For example, you could list available stored procedures in a list box and have the user select which procedures to execute. You could also stream the output of the reports to a text file.

See also

Walkthrough: Simple Object Model and Query (C#)

This walkthrough provides a fundamental end-to-end LINQ to SQL scenario with minimal complexities. You will create an entity class that models the Customers table in the sample Northwind database. You will then create a simple query to list customers who are located in London. This walkthrough is code-oriented by design to help show LINQ to SQL concepts. Normally, you would use the Object Relational Designer to create your object model.

Note Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE.

This walkthrough was written by using Visual C# Development Settings.

Prerequisites
Overview

This walkthrough consists of six main tasks:
Creating a LINQ to SQL Solution

In this first task, you create a Visual Studio solution that contains the necessary references to build and run a LINQ to SQL project.

To create a LINQ to SQL solution
Adding LINQ References and Directives

This walkthrough uses assemblies that might not be installed by default in your project. If System.Data.Linq is not listed as a reference in your project (expand the References node in Solution Explorer), add it, as explained in the following steps.

To add System.Data.Linq
Mapping a Class to a Database Table

In this step, you create a class and map it to a database table. Such a class is termed an entity class. Note that the mapping is accomplished by just adding the TableAttribute attribute. The Name property specifies the name of the table in the database.

To create an entity class and map it to a database table
Designating Properties on the Class to Represent Database Columns

In this step, you accomplish several tasks.
To represent characteristics of two database columns
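The mapping described in the two steps above can be sketched as follows. This is a minimal illustration, not the walkthrough's literal listing; the backing-field names (_CustomerID, _City) are illustrative conventions.

```csharp
using System.Data.Linq.Mapping;

// Entity class mapped to the Customers table (see the TableAttribute step above).
[Table(Name = "Customers")]
public class Customer
{
    private string _CustomerID;

    // Maps this property to the CustomerID column. The Storage property tells
    // LINQ to SQL to bypass the public accessors and use the backing field.
    [Column(Storage = "_CustomerID")]
    public string CustomerID
    {
        get { return _CustomerID; }
        set { _CustomerID = value; }
    }

    private string _City;

    [Column(Storage = "_City")]
    public string City
    {
        get { return _City; }
        set { _City = value; }
    }
}
```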
Specifying the Connection to the Northwind Database

In this step, you use a DataContext object to establish a connection between your code-based data structures and the database itself. The DataContext is the main channel through which you retrieve objects from the database and submit changes. You also declare a Table&lt;Customer&gt; to act as the logical, typed table for your queries against the Customers table in the database. You will create and execute these queries in later steps.

To specify the database connection
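A minimal sketch of this step follows. The .mdf path is an assumption based on the c:\linqtest5 folder this walkthrough series uses; adjust it to your environment.

```csharp
using System.Data.Linq;

// Connect the object model to the database file (path is an assumption).
DataContext db = new DataContext(@"c:\linqtest5\northwnd.mdf");

// The logical, typed table for queries against the Customers table.
Table<Customer> customers = db.GetTable<Customer>();
```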
Creating a Simple Query

In this step, you create a query to find which customers in the database Customers table are located in London. The query code in this step just describes the query. It does not execute it. This approach is known as deferred execution. For more information, see Introduction to LINQ Queries (C#). You will also produce a log output to show the SQL commands that LINQ to SQL generates. This logging feature (which uses Log) is helpful in debugging, and in determining that the commands being sent to the database accurately represent your query.

To create a simple query
Executing the Query

In this step, you actually execute the query. The query expressions you created in the previous steps are not evaluated until the results are needed. When you begin the foreach iteration, a SQL command is executed against the database and objects are materialized.

To execute the query
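Assuming the db DataContext and customers table declared in the earlier steps, the query and its execution might look like this sketch:

```csharp
// Describe the query. Deferred execution: nothing runs yet.
var custQuery =
    from cust in customers
    where cust.City == "London"
    select cust;

// Route the generated SQL to the console for inspection.
db.Log = Console.Out;

// Iteration triggers execution: the SQL command is sent to the
// database and Customer objects are materialized one by one.
foreach (Customer cust in custQuery)
{
    Console.WriteLine("ID={0}, City={1}", cust.CustomerID, cust.City);
}
```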
Next Steps

The Walkthrough: Querying Across Relationships (C#) topic continues where this walkthrough ends. The Querying Across Relationships walkthrough demonstrates how LINQ to SQL can query across tables, similar to joins in a relational database. If you want to do the Querying Across Relationships walkthrough, make sure to save the solution for the walkthrough you have just completed, which is a prerequisite.

See also

Walkthrough: Querying Across Relationships (C#)

This walkthrough demonstrates the use of LINQ to SQL associations to represent foreign-key relationships in the database.

Note Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE.

This walkthrough was written by using Visual C# Development Settings.

Prerequisites

You must have completed Walkthrough: Simple Object Model and Query (C#). This walkthrough builds on that one, including the presence of the northwnd.mdf file in c:\linqtest5.

Overview

This walkthrough consists of three main tasks:
Mapping Relationships Across Tables

After the Customer class definition, create the Order entity class definition that includes the following code, which indicates that Order.Customer relates as a foreign key to Customer.CustomerID.

To add the Order entity class
Annotating the Customer Class

In this step, you annotate the Customer class to indicate its relationship to the Order class. (This addition is not strictly necessary, because defining the relationship in either direction is sufficient to create the link. But adding this annotation does enable you to easily navigate objects in either direction.)

To annotate the Customer class
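The two mapping steps above can be sketched as follows. This is an illustrative outline, not the walkthrough's exact listing; Customer is shown as a partial class only for brevity, since in the walkthrough you add these members to the existing Customer class.

```csharp
using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table(Name = "Orders")]
public class Order
{
    [Column(IsPrimaryKey = true)]
    public int OrderID;

    [Column]
    public string CustomerID;

    private EntityRef<Customer> _Customer;

    // Order.CustomerID is a foreign key into Customer.CustomerID.
    [Association(Storage = "_Customer", ThisKey = "CustomerID")]
    public Customer Customer
    {
        get { return _Customer.Entity; }
        set { _Customer.Entity = value; }
    }
}

// The reverse side of the link: a customer's collection of orders,
// navigable without an explicit join.
public partial class Customer
{
    private EntitySet<Order> _Orders = new EntitySet<Order>();

    [Association(Storage = "_Orders", OtherKey = "CustomerID")]
    public EntitySet<Order> Orders
    {
        get { return _Orders; }
        set { _Orders.Assign(value); }
    }
}
```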
Creating and Running a Query Across the Customer-Order Relationship

You can now access Order objects directly from the Customer objects, or in the opposite order. You do not need an explicit join between customers and orders.

To access Order objects by using Customer objects
Creating a Strongly Typed View of Your Database

It is much easier to start with a strongly typed view of your database. By strongly typing the DataContext object, you do not need calls to GetTable. You can use strongly typed tables in all your queries when you use the strongly typed DataContext object. In the following steps, you will create Customers as a strongly typed table that maps to the Customers table in the database.

To strongly type the DataContext object
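A strongly typed DataContext might be sketched like this (the connection path is an assumption taken from this walkthrough's c:\linqtest5 convention):

```csharp
using System.Data.Linq;

// Tables are exposed as members, so queries no longer need GetTable calls.
public class Northwind : DataContext
{
    public Table<Customer> Customers;
    public Table<Order> Orders;

    public Northwind(string connection) : base(connection) { }
}

// Usage sketch:
// Northwind db = new Northwind(@"c:\linqtest5\northwnd.mdf");
// var query = from cust in db.Customers
//             where cust.City == "London"
//             select cust;
```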
Next Steps

The next walkthrough (Walkthrough: Manipulating Data (C#)) demonstrates how to manipulate data. That walkthrough does not require that you save the two walkthroughs in this series that you have already completed.

See also

Walkthrough: Manipulating Data (C#)

This walkthrough provides a fundamental end-to-end LINQ to SQL scenario for adding, modifying, and deleting data in a database. You will use a copy of the sample Northwind database to add a customer, change the name of a customer, and delete an order.

Note Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE.

This walkthrough was written by using Visual C# Development Settings.

Prerequisites

This walkthrough requires the following:
Overview

This walkthrough consists of six main tasks:
Creating a LINQ to SQL Solution

In this first task, you create a Visual Studio solution that contains the necessary references to build and run a LINQ to SQL project.

To create a LINQ to SQL solution
Adding LINQ References and Directives

This walkthrough uses assemblies that might not be installed by default in your project. If System.Data.Linq is not listed as a reference in your project, add it, as explained in the following steps:

To add System.Data.Linq
Adding the Northwind Code File to the Project

These steps assume that you have used the SQLMetal tool to generate a code file from the Northwind sample database. For more information, see the Prerequisites section earlier in this walkthrough.

To add the northwind code file to the project
Setting Up the Database Connection

First, test your connection to the database. Note especially that the name of the database, Northwnd, has no i character. If you generate errors in the next steps, review the northwind.cs file to determine how the Northwind partial class is spelled.

To set up and test the database connection
Creating a New Entity

Creating a new entity is straightforward. You can create objects (such as Customer) by using the new keyword. In this and the following sections, you are making changes only to the local cache. No changes are sent to the database until you call SubmitChanges toward the end of this walkthrough.

To add a new Customer entity object
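A sketch of this step, assuming the SqlMetal-generated Northwind classes from earlier in the walkthrough; the customer ID and names here are invented sample values:

```csharp
// The new object lives only in the local cache at this point.
Customer newCust = new Customer();
newCust.CustomerID = "A3VCA";               // hypothetical five-character ID
newCust.CompanyName = "Victuailles en stock"; // hypothetical company name
newCust.City = "Lyon";

// Queue the insert; nothing reaches the database until SubmitChanges.
db.Customers.InsertOnSubmit(newCust);
```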
Updating an Entity

In the following steps, you will retrieve a Customer object and modify one of its properties.

To change the name of a Customer
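The retrieve-and-modify pattern might look like this sketch (ALFKI is a customer ID that exists in the Northwind sample data; the new contact name is invented):

```csharp
// Retrieve one customer; the DataContext tracks any subsequent
// property changes for the next SubmitChanges.
Customer existingCust =
    (from c in db.Customers
     where c.CustomerID == "ALFKI"
     select c).First();

existingCust.ContactName = "New Contact";
```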
Deleting an Entity

Using the same customer object, you can delete the first order. The following code demonstrates how to sever relationships between rows, and how to delete a row from the database. Add the following code before Console.ReadLine to see how objects can be deleted:

To delete a row
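Continuing with a previously retrieved customer object (named existingCust here for illustration), the sever-then-delete pattern can be sketched as:

```csharp
// Sever the relationship first, then queue the row itself for deletion.
Order ord0 = existingCust.Orders[0];

existingCust.Orders.Remove(ord0);   // detaches the order from the customer
db.Orders.DeleteOnSubmit(ord0);     // marks the row for deletion on SubmitChanges
```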
Submitting Changes to the Database

The final step required for creating, updating, and deleting objects is to actually submit the changes to the database. Without this step, your changes are only local and will not appear in query results.

To submit changes to the database
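The submission itself is a single call. Wrapping it in a conflict handler, as sketched below, is optional hardening and not part of the walkthrough's minimal steps:

```csharp
try
{
    // Transmit all pending inserts, updates, and deletes in one call.
    db.SubmitChanges();
}
catch (ChangeConflictException)
{
    // Optimistic concurrency conflict: another user changed the same rows.
    // One possible policy: keep the current user's changes and retry.
    db.ChangeConflicts.ResolveAll(RefreshMode.KeepChanges);
    db.SubmitChanges();
}
```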
Note After you have added the new customer by submitting the changes, you cannot execute this solution again as is. To execute the solution again, change the name of the customer and customer ID to be added.

See also

Walkthrough: Using Only Stored Procedures (C#)

This walkthrough provides a basic end-to-end LINQ to SQL scenario for accessing data by executing stored procedures only. This approach is often used by database administrators to limit how the datastore is accessed.

Note You can also use stored procedures in LINQ to SQL applications to override default behavior, especially for Create, Update, and Delete processes. For more information, see Customizing Insert, Update, and Delete Operations.

For purposes of this walkthrough, you will use two methods that have been mapped to stored procedures in the Northwind sample database: CustOrdersDetail and CustOrderHist. The mapping occurs when you run the SqlMetal command-line tool to generate a C# file. For more information, see the Prerequisites section later in this walkthrough.

This walkthrough does not rely on the Object Relational Designer. Developers using Visual Studio can also use the O/R Designer to implement stored procedure functionality. See LINQ to SQL Tools in Visual Studio.

Note Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE.

This walkthrough was written by using Visual C# Development Settings.

Prerequisites

This walkthrough requires the following:
Overview

This walkthrough consists of six main tasks:
Creating a LINQ to SQL Solution

In this first task, you create a Visual Studio solution that contains the necessary references to build and run a LINQ to SQL project.

To create a LINQ to SQL solution
Adding the LINQ to SQL Assembly Reference

The LINQ to SQL assembly is not included in the standard Windows Forms Application template. You will have to add the assembly yourself, as explained in the following steps:

To add System.Data.Linq.dll
Adding the Northwind Code File to the Project

This step assumes that you have used the SqlMetal tool to generate a code file from the Northwind sample database. For more information, see the Prerequisites section earlier in this walkthrough.

To add the northwind code file to the project
Creating a Database Connection

In this step, you define the connection to the Northwind sample database. This walkthrough uses "c:\linqtest7\northwnd.mdf" as the path.

To create the database connection
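Once the connection exists, calling a mapped stored procedure is an ordinary method call on the generated DataContext. The following is a hedged sketch: CustOrderHist is one of the two mapped methods named above, and CustOrderHistResult follows SqlMetal's convention of generating a result type per stored-procedure rowset, with members such as ProductName and Total taken from the procedure's output columns.

```csharp
// Northwind is the SqlMetal-generated DataContext class from northwind.cs.
Northwind db = new Northwind(@"c:\linqtest7\northwnd.mdf");

// Execute the stored procedure and enumerate its rowset.
foreach (CustOrderHistResult row in db.CustOrderHist("ALFKI"))
{
    Console.WriteLine("{0}: {1}", row.ProductName, row.Total);
}
```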
Setting up the User Interface

In this task, you set up an interface so that users can execute stored procedures to access data in the database. In the application that you are developing with this walkthrough, users can access data in the database only by using the stored procedures embedded in the application.

To set up the user interface
To handle button clicks
Testing the Application

Now it is time to test your application. Note that your contact with the datastore is limited to whatever actions the two stored procedures can take. Those actions are to return the products included for any orderID you enter, or to return a history of products ordered for any CustomerID you enter.

To test the application
Next Steps

You can enhance this project by making some changes. For example, you could list available stored procedures in a list box and have the user select which procedures to execute. You could also stream the output of the reports to a text file.

See also

Programming Guide

This section contains information about how to create and use your LINQ to SQL object model. If you are using Visual Studio, you can also use the Object Relational Designer to perform many of these same tasks. You can also search Microsoft Docs for specific issues, and you can participate in the LINQ Forum, where you can discuss more complex topics in detail with experts. Finally, the LINQ to SQL: .NET Language-Integrated Query for Relational Data white paper details LINQ to SQL technology, complete with Visual Basic and C# code examples.

In This Section
Creating the Object Model
Communicating with the Database
Querying the Database
Making and Submitting Data Changes
Debugging Support
Background Information

Related Sections
LINQ to SQL
Stored Procedures
Introduction to LINQ (C#)
Introduction to LINQ (Visual Basic)

Creating the Object Model

You can create your object model from an existing database and use the model in its default state. You can also customize many aspects of the model and its behavior. If you are using Visual Studio, you can use the Object Relational Designer to create your object model.

In This Section
How to: Generate the Object Model in Visual Basic or C#
How to: Generate the Object Model as an External File
How to: Generate Customized Code by Modifying a DBML File
How to: Validate DBML and External Mapping Files
How to: Make Entities Serializable
How to: Customize Entity Classes by Using the Code Editor Related Sections
The LINQ to SQL Object Model
Typical Steps for Using LINQ to SQL

How to: Generate the Object Model in Visual Basic or C#

In LINQ to SQL, an object model in your own programming language is mapped to a relational database. Two tools are available for automatically generating a Visual Basic or C# model from the metadata of an existing database.
Documentation for the O/R Designer provides examples of how to generate a Visual Basic or C# object model by using the O/R Designer. The following information provides examples of how to use the SQLMetal command-line tool. For more information, see SqlMetal.exe (Code Generation Tool).

Example

The SQLMetal command line shown in the following example produces Visual Basic code as the attribute-based object model of the Northwind sample database. Stored procedures and functions are also rendered.

sqlmetal /code:northwind.vb /language:vb "c:\northwnd.mdf" /sprocs /functions

Example

The SQLMetal command line shown in the following example produces C# code as the attribute-based object model of the Northwind sample database. Stored procedures and functions are also rendered, and table names are automatically pluralized.

sqlmetal /code:northwind.cs /language:csharp "c:\northwnd.mdf" /sprocs /functions /pluralize

See also
How to: Generate the Object Model as an External File

As an alternative to attribute-based mapping, you can generate your object model as an external XML file by using the SQLMetal command-line tool. For more information, see SqlMetal.exe (Code Generation Tool). By using an external XML mapping file, you reduce clutter in your code. You can also change behavior by modifying the external file without recompiling the binaries of your application. For more information, see External Mapping.

Note The Object Relational Designer does not support generation of an external mapping file.

Example

The following command generates an external mapping file from the Northwind sample database.

sqlmetal /server:myserver /database:northwind /map:externalfile.xml

Example

The following excerpt from an external mapping file shows the mapping for the Customers table in the Northwind sample database. This excerpt was generated by executing SQLMetal with the /map option.

XML
<?xml version="1.0" encoding="utf-8"?>
<Database xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xmlns:xsd="http://www.w3.org/2001/XMLSchema" Name="northwnd">
  <Table Name="Customers">
    <Type Name=".Customer">
      <Column Name="CustomerID" Member="CustomerID" Storage="_CustomerID" DbType="NChar(5) NOT NULL" CanBeNull="False" IsPrimaryKey="True" />
      <Column Name="CompanyName" Member="CompanyName" Storage="_CompanyName" DbType="NVarChar(40) NOT NULL" CanBeNull="False" />
      <Column Name="ContactName" Member="ContactName" Storage="_ContactName" DbType="NVarChar(30)" />
      <Column Name="ContactTitle" Member="ContactTitle" Storage="_ContactTitle" DbType="NVarChar(30)" />
      <Column Name="Address" Member="Address" Storage="_Address" DbType="NVarChar(60)" />
      <Column Name="City" Member="City" Storage="_City" DbType="NVarChar(15)" />
      <Column Name="Region" Member="Region" Storage="_Region" DbType="NVarChar(15)" />
      <Column Name="PostalCode" Member="PostalCode" Storage="_PostalCode" DbType="NVarChar(10)" />
      <Column Name="Country" Member="Country" Storage="_Country" DbType="NVarChar(15)" />
      <Column Name="Phone" Member="Phone" Storage="_Phone" DbType="NVarChar(24)" />
      <Column Name="Fax" Member="Fax" Storage="_Fax" DbType="NVarChar(24)" />
      <Association Name="FK_CustomerCustomerDemo_Customers" Member="CustomerCustomerDemos" Storage="_CustomerCustomerDemos" ThisKey="CustomerID" OtherTable="CustomerCustomerDemo" OtherKey="CustomerID" DeleteRule="NO ACTION" />
      <Association Name="FK_Orders_Customers" Member="Orders" Storage="_Orders" ThisKey="CustomerID" OtherTable="Orders" OtherKey="CustomerID" DeleteRule="NO ACTION" />
    </Type>
  </Table>
</Database>

See also

How to: Generate Customized Code by Modifying a DBML File

You can generate Visual Basic or C# source code from a database markup language (.dbml) metadata file. This approach provides an opportunity to customize the default .dbml file before you generate the application mapping code. This is an advanced feature. The steps in this process are as follows:
The following examples use the SQLMetal command-line tool. For more information, see SqlMetal.exe (Code Generation Tool).

Example

The following commands generate a .dbml file from the Northwind sample database. As source for the database metadata, you can use either the name of the database or the name of the .mdf file.

sqlmetal /server:myserver /database:northwind /dbml:mymeta.dbml
sqlmetal /dbml:mymeta.dbml mydbfile.mdf

Example

The following commands generate a Visual Basic or C# source code file from a .dbml file.

sqlmetal /namespace:nwind /code:nwind.vb /language:vb DBMLFile.dbml
sqlmetal /namespace:nwind /code:nwind.cs /language:csharp DBMLFile.dbml

See also

How to: Validate DBML and External Mapping Files

External mapping files and .dbml files that you modify must be validated against their respective schema definitions. This topic provides Visual Studio users with the steps to implement the validation process.

Note Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE.

To validate a .dbml or XML file
Alternate Method for Supplying Schema Definition

If for some reason the appropriate .xsd file does not appear in the XML Schemas dialog box, you can download the .xsd file from a Help topic. The following steps help you save the downloaded file in the Unicode format required by the Visual Studio XML Editor.

To copy a schema definition file from a Help topic
See also

How to: Make Entities Serializable

You can make entities serializable when you generate your code. Entity classes are decorated with the DataContractAttribute attribute, and columns with the DataMemberAttribute attribute. Developers using Visual Studio can use the Object Relational Designer for this purpose. If you are using the SQLMetal command-line tool, use the /serialization option with the unidirectional argument. For more information, see SqlMetal.exe (Code Generation Tool).

Example

The following SQLMetal command lines produce files that have serializable entities.

sqlmetal /code:nwserializable.vb /language:vb "c:\northwnd.mdf" /sprocs /functions /pluralize /serialization:unidirectional
sqlmetal /code:nwserializable.cs /language:csharp "c:\northwnd.mdf" /sprocs /functions /pluralize /serialization:unidirectional

See also

How to: Customize Entity Classes by Using the Code Editor

Developers using Visual Studio can use the Object Relational Designer to create or customize their entity classes. You can also use the Visual Studio code editor to write your own mapping code or to customize code that has already been generated. For more information, see Attribute-Based Mapping. The topics in this section describe how to customize your object model.
How to: Specify Database Names
How to: Represent Tables as Classes
How to: Represent Columns as Class Members
How to: Represent Primary Keys
How to: Map Database Relationships
How to: Represent Columns as Database-Generated
How to: Represent Columns as Timestamp or Version Columns
How to: Specify Database Data Types
How to: Represent Computed Columns
How to: Specify Private Storage Fields
How to: Represent Columns as Allowing Null Values
How to: Map Inheritance Hierarchies
How to: Specify Concurrency-Conflict Checking See alsoHow to: Specify Database NamesUse the Name property on a DatabaseAttribute attribute to specify the name of a database when a name is not supplied by the connection. For code samples, see Name. To specify the name of the database
See alsoHow to: Represent Tables as ClassesUse the LINQ to SQL TableAttribute attribute to designate a class as an entity class associated with a database table. To map a class to a database table
ExampleThe following code establishes the Customer class as an entity class that is associated with the Customers database table. C#[Table(Name = "Customers")] public class Customer { // ... } You do not have to specify the Name property if the name can be inferred. If you do not specify a name, the name is presumed to be the same as the name of the class. See alsoHow to: Represent Columns as Class MembersUse the LINQ to SQL ColumnAttribute attribute to associate a field or property with a database column. To map a field or property to a database column
ExampleThe following code maps the CustomerID field in the Customer class to the CustomerID column in the Customers database table. C#[Table(Name="Customers")] public class Customer { [Column(Name="CustomerID")] public string CustomerID; // ... } You do not have to specify the Name property if the name can be inferred. If you do not specify a name, the name is presumed to be the same as the name of the property or field. See alsoHow to: Represent Primary KeysUse the LINQ to SQL IsPrimaryKey property on the ColumnAttribute attribute to designate a property or field to represent the primary key for a database column. For code examples, see IsPrimaryKey. Note LINQ to SQL does not support computed columns as primary keys. To designate a property or field as a primary key
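A minimal sketch of a primary-key mapping, reusing the Northwind Customers table from the surrounding topics; LINQ to SQL uses members marked IsPrimaryKey for identity tracking and for generating the WHERE clause of updates and deletes:

```csharp
using System.Data.Linq.Mapping;

// CustomerID is the primary key of the Customers table.
[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true)]
    public string CustomerID;

    [Column]
    public string ContactName;
}
```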
See alsoHow to: Map Database RelationshipsYou can encode as property references in your entity class any data relationships that will always be the same. In the Northwind sample database, for example, because customers typically place orders, there is always a relationship in the model between customers and their orders. LINQ to SQL defines an AssociationAttribute attribute to help represent such relationships. This attribute is used together with the EntitySet<TEntity> and EntityRef<TEntity> types to represent what would be a foreign key relationship in a database. For more information, see the Association Attribute section of Attribute-Based Mapping. Note AssociationAttribute and ColumnAttribute Storage property values are case sensitive. For example, ensure that values used in the attribute for the AssociationAttribute.Storage property match the case for the corresponding property names used elsewhere in the code. This applies to all .NET programming languages, even those which are not typically case sensitive, including Visual Basic. For more information about the Storage property, see DataAttribute.Storage. Most relationships are one-to-many, as in the example later in this topic. You can also represent one-to-one and many-to-many relationships as follows:
ExampleIn the following one-to-many example, the Customer class has a property that declares the relationship between customers and their orders. The Orders property is of type EntitySet<TEntity>. This type signifies that this relationship is one-to-many (one customer to many orders). The OtherKey property is used to describe how this association is accomplished, namely, by specifying the name of the property in the related class to be compared with this one. In this example, the CustomerID property is compared, just as a database join would compare that column value. Note If you are using Visual Studio, you can use the Object Relational Designer to create an association between classes. C#[Table(Name = "Customers")] public partial class Customer { [Column(IsPrimaryKey = true)] public string CustomerID; // ... private EntitySet<Order> _Orders; [Association(Storage = "_Orders", OtherKey = "CustomerID")] public EntitySet<Order> Orders { get { return this._Orders; } set { this._Orders.Assign(value); } } } ExampleYou can also reverse the situation. Instead of using the Customer class to describe the association between customers and orders, you can use the Order class. The Order class uses the EntityRef<TEntity> type to describe the relationship back to the customer, as in the following code example. Note The EntityRef<TEntity> class supports deferred loading. For more information, see Deferred versus Immediate Loading. C#[Table(Name = "Orders")] public class Order { [Column(IsPrimaryKey = true)] public int OrderID; [Column] public string CustomerID; private EntityRef<Customer> _Customer; [Association(Storage = "_Customer", ThisKey = "CustomerID")] public Customer Customer { get { return this._Customer.Entity; } set { this._Customer.Entity = value; } } } See alsoHow to: Represent Columns as Database-GeneratedUse the LINQ to SQL IsDbGenerated property on the ColumnAttribute attribute to designate a field or property as representing a database-generated column. 
For code examples, see IsDbGenerated. To designate a field or property as representing a database-generated column
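A minimal sketch, reusing the Northwind Orders table assumed elsewhere in this section: OrderID is an IDENTITY column, so the database assigns its value on insert and LINQ to SQL synchronizes the value back into the object after SubmitChanges.

```csharp
using System.Data.Linq.Mapping;

// The database generates OrderID, so LINQ to SQL does not supply a
// value on insert and reads the generated value back afterward.
[Table(Name = "Orders")]
public class Order
{
    [Column(IsPrimaryKey = true, IsDbGenerated = true)]
    public int OrderID;

    [Column]
    public string CustomerID;
}
```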
See alsoHow to: Represent Columns as Timestamp or Version ColumnsUse the LINQ to SQL IsVersion property of the ColumnAttribute attribute to designate a field or property as representing a database column that holds database timestamps or version numbers. For code examples, see IsVersion. To designate a field or property as representing a timestamp or version column
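A minimal sketch, assuming a hypothetical RowVersion column of the SQL Server rowversion (timestamp) type: marking the member with IsVersion = true lets LINQ to SQL use this single column for optimistic concurrency checks instead of comparing every mapped member.

```csharp
using System.Data.Linq;        // for the Binary type
using System.Data.Linq.Mapping;

// RowVersion is a hypothetical rowversion/timestamp column that the
// database updates automatically on every modification of the row.
[Table(Name = "Products")]
public class Product
{
    [Column(IsPrimaryKey = true)]
    public int ProductID;

    [Column(IsVersion = true)]
    public Binary RowVersion;
}
```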
See also
How to: Specify Database Data TypesUse the LINQ to SQL DbType property on a ColumnAttribute attribute to specify the exact text that defines the column in a T-SQL table declaration. You must specify the DbType property only if you plan to use CreateDatabase to create an instance of the database. For code examples, see DbType. To specify text to define a data type in a T-SQL table
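As a sketch (the exact T-SQL text shown is illustrative), DbType carries the column definition that CreateDatabase would emit; it has no effect on ordinary queries against an existing database:

```csharp
using System.Data.Linq.Mapping;

// DbType holds the literal T-SQL fragment used in the CREATE TABLE
// statement if CreateDatabase is called.
[Table(Name = "Products")]
public class Product
{
    [Column(IsPrimaryKey = true, DbType = "Int NOT NULL")]
    public int ProductID;

    [Column(DbType = "NVarChar(40) NOT NULL")]
    public string ProductName;
}
```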
See alsoHow to: Represent Computed ColumnsUse the LINQ to SQL Expression property on a ColumnAttribute attribute to represent a column whose contents are the result of calculation. For code examples, see Expression. Note LINQ to SQL does not support computed columns as primary keys. To represent a computed column
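A minimal sketch, assuming a hypothetical TotalPrice column computed by the database: the Expression property supplies the computation (used by CreateDatabase), and marking the column IsDbGenerated keeps LINQ to SQL from trying to insert or update it.

```csharp
using System.Data.Linq.Mapping;

// TotalPrice is a hypothetical computed column: its value is the
// result of "UnitPrice * Quantity", calculated by the database.
[Table(Name = "OrderDetails")]
public class OrderDetail
{
    [Column] public decimal UnitPrice;
    [Column] public short Quantity;

    [Column(Expression = "UnitPrice * Quantity", IsDbGenerated = true)]
    public decimal TotalPrice;
}
```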
See alsoHow to: Specify Private Storage FieldsUse the LINQ to SQL Storage property on the DataAttribute attribute to designate the name of an underlying storage field. For code examples, see Storage. To specify the name of an underlying storage field
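A minimal sketch: Storage points LINQ to SQL at the private backing field, so reads and writes during object materialization bypass the public accessor (and any side effects it might have).

```csharp
using System.Data.Linq.Mapping;

// LINQ to SQL reads and writes _ContactName directly instead of
// going through the ContactName property accessors.
[Table(Name = "Customers")]
public class Customer
{
    private string _ContactName;

    [Column(Storage = "_ContactName")]
    public string ContactName
    {
        get { return _ContactName; }
        set { _ContactName = value; }
    }
}
```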
See alsoHow to: Represent Columns as Allowing Null ValuesUse the LINQ to SQL CanBeNull property on the ColumnAttribute attribute to specify that the associated database column can hold null values. For code examples, see CanBeNull. To designate a column as allowing null values
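A minimal sketch, reusing the Northwind Customers table: Region is nullable in the database, so the column is mapped with CanBeNull = true. For value types you would also use a nullable CLR type (for example, int?).

```csharp
using System.Data.Linq.Mapping;

// Region can hold NULL in the database; the reference type string
// can represent that directly.
[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true)]
    public string CustomerID;

    [Column(CanBeNull = true)]
    public string Region;
}
```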
See alsoHow to: Map Inheritance HierarchiesTo implement inheritance mapping in LINQ, you must specify the attributes and attribute properties on the root class of the inheritance hierarchy as described in the following steps. Developers using Visual Studio can use the Object Relational Designer to map inheritance hierarchies. See How to: Configure inheritance by using the O/R Designer. Note No special attributes or properties are required on the subclasses. Note especially that subclasses do not have the TableAttribute attribute. To map an inheritance hierarchy
ExampleNote If you are using Visual Studio, you can use the Object Relational Designer to configure inheritance. See How to: Configure inheritance by using the O/R Designer. In the following code example, Vehicle is defined as the root class, and the previous steps have been implemented to describe the hierarchy for LINQ. C#[Table] [InheritanceMapping(Code = "C", Type = typeof(Car))] [InheritanceMapping(Code = "T", Type = typeof(Truck))] [InheritanceMapping(Code = "V", Type = typeof(Vehicle), IsDefault = true)] public class Vehicle { [Column(IsDiscriminator = true)] public string DiscKey; [Column(IsPrimaryKey = true)] public string VIN; [Column] public string MfgPlant; } public class Car : Vehicle { [Column] public int TrimCode; [Column] public string ModelName; } public class Truck : Vehicle { [Column] public int Tonnage; [Column] public int Axles; } See alsoHow to: Specify Concurrency-Conflict CheckingYou can specify which columns of the database are to be checked for concurrency conflicts when you call SubmitChanges. For more information, see How to: Specify Which Members are Tested for Concurrency Conflicts. ExampleThe following code specifies that the HomePage member should never be tested during update checks. For more information, see UpdateCheck. C#[Column(Storage="_HomePage", DbType="NText", UpdateCheck=UpdateCheck.Never)] public string HomePage { get { return this._HomePage; } set { if ((this._HomePage != value)) { this.OnHomePageChanging(value); this.SendPropertyChanging(); this._HomePage = value; this.SendPropertyChanged("HomePage"); this.OnHomePageChanged(); } } } See alsoCommunicating with the DatabaseThe topics in this section describe some basic aspects of how you establish and maintain communication with the database. In This Section
How to: Connect to a Database
How to: Directly Execute SQL Commands
How to: Reuse a Connection Between an ADO.NET Command and a DataContext See alsoHow to: Connect to a DatabaseThe DataContext is the main conduit by which you connect to a database, retrieve objects from it, and submit changes back to it. You use the DataContext just as you would use an ADO.NET SqlConnection. In fact, the DataContext is initialized with a connection or connection string that you supply. For more information, see DataContext Methods (O/R Designer). The purpose of the DataContext is to translate your requests for objects into SQL queries to be made against the database, and then to assemble objects out of the results. The DataContext enables Language-Integrated Query (LINQ) by implementing the same operator pattern as the Standard Query Operators, such as Where and Select. Important Maintaining a secure connection is of the highest importance. For more information, see Security in LINQ to SQL. ExampleIn the following example, the DataContext is used to connect to the Northwind sample database and to retrieve rows of customers whose city is London. C#// DataContext takes a connection string. DataContext db = new DataContext(@"c:\Northwind.mdf"); // Get a typed table to run queries. Table<Customer> Customers = db.GetTable<Customer>(); // Query for customers from London. var query = from cust in Customers where cust.City == "London" select cust; foreach (var cust in query) Console.WriteLine("id = {0}, City = {1}", cust.CustomerID, cust.City); Each database table is represented as a Table collection available by way of the GetTable method, by using the entity class to identify it. ExampleBest practice is to declare a strongly typed DataContext instead of relying on the basic DataContext class and the GetTable method. A strongly typed DataContext declares all Table collections as members of the context, as in the following example. 
C#public partial class Northwnd : DataContext { public Table<Customer> Customers; public Table<Order> Orders; public Northwnd(string connection) : base(connection) { } } You can then express the query for customers from London more simply as: C#Northwnd db = new Northwnd(@"c:\Northwnd.mdf"); var query = from cust in db.Customers where cust.City == "London" select cust; foreach (var cust in query) Console.WriteLine("id = {0}, City = {1}", cust.CustomerID, cust.City); See alsoHow to: Directly Execute SQL CommandsAssuming a DataContext connection, you can use ExecuteCommand to execute SQL commands that do not return objects. ExampleThe following example causes SQL Server to increase UnitPrice by 1.00. C#db.ExecuteCommand("UPDATE Products SET UnitPrice = UnitPrice + 1.00"); See alsoHow to: Reuse a Connection Between an ADO.NET Command and a DataContextBecause LINQ to SQL is a part of the ADO.NET family of technologies and is based on services provided by ADO.NET, you can reuse a connection between an ADO.NET command and a DataContext. ExampleThe following example shows how to reuse the same connection between an ADO.NET command and the DataContext.
C#string connString = @"Data Source=.\SQLEXPRESS;AttachDbFilename=c:\northwind.mdf; Integrated Security=True; Connect Timeout=30; User Instance=True"; SqlConnection nwindConn = new SqlConnection(connString); nwindConn.Open(); Northwnd interop_db = new Northwnd(nwindConn); SqlTransaction nwindTxn = nwindConn.BeginTransaction(); try { SqlCommand cmd = new SqlCommand( "UPDATE Products SET QuantityPerUnit = 'single item' WHERE ProductID = 3"); cmd.Connection = nwindConn; cmd.Transaction = nwindTxn; cmd.ExecuteNonQuery(); interop_db.Transaction = nwindTxn; Product prod1 = interop_db.Products .First(p => p.ProductID == 4); Product prod2 = interop_db.Products .First(p => p.ProductID == 5); prod1.UnitsInStock -= 3; prod2.UnitsInStock -= 5; interop_db.SubmitChanges(); nwindTxn.Commit(); } catch (Exception e) { Console.WriteLine(e.Message); Console.WriteLine("Error submitting changes... all changes rolled back."); } nwindConn.Close(); See alsoQuerying the DatabaseThis group of topics describes how to develop and execute queries in LINQ to SQL projects. In This Section
How to: Query for Information
How to: Retrieve Information As Read-Only
How to: Control How Much Related Data Is Retrieved
How to: Filter Related Data
How to: Turn Off Deferred Loading
How to: Directly Execute SQL Queries
How to: Store and Reuse Queries
How to: Handle Composite Keys in Queries
How to: Retrieve Many Objects At Once
How to: Filter at the DataContext Level
Query Examples How to: Query for InformationQueries in LINQ to SQL use the same syntax as queries in LINQ. The only difference is that the objects referenced in LINQ to SQL queries are mapped to elements in a database. For more information, see Introduction to LINQ Queries (C#). LINQ to SQL translates the queries you write into equivalent SQL queries and sends them to the server for processing. Some features of LINQ queries might need special attention in LINQ to SQL applications. For more information, see Query Concepts. ExampleThe following query asks for a list of customers from London. In this example, Customers is a table in the Northwind sample database. C#Northwnd db = new Northwnd(@"c:\northwnd.mdf"); // Query for customers in London. IQueryable<Customer> custQuery = from cust in db.Customers where cust.City == "London" select cust; See alsoHow to: Retrieve Information As Read-OnlyWhen you do not intend to change the data, you can increase the performance of queries by seeking read-only results. You implement read-only processing by setting ObjectTrackingEnabled to false. Note When ObjectTrackingEnabled is set to false, DeferredLoadingEnabled is implicitly set to false. ExampleThe following code retrieves a read-only collection of employee hire dates. C#Northwnd db = new Northwnd(@"c:\northwnd.mdf"); db.ObjectTrackingEnabled = false; IOrderedQueryable<Employee> hireQuery = from emp in db.Employees orderby emp.HireDate select emp; foreach (Employee empObj in hireQuery) { Console.WriteLine("EmpID = {0}, Date Hired = {1}", empObj.EmployeeID, empObj.HireDate); } See alsoHow to: Control How Much Related Data Is RetrievedUse the LoadWith method to specify which data related to your main target should be retrieved at the same time. For example, if you know you will need information about customers' orders, you can use LoadWith to make sure that the order information is retrieved at the same time as the customer information. 
This approach results in only one trip to the database for both sets of information. Note You can retrieve data related to the main target of your query by retrieving a cross-product as one large projection, such as retrieving orders when you target customers. But this approach often has disadvantages. For example, the results are just projections and not entities that can be changed and persisted by LINQ to SQL. And you can be retrieving lots of data that you do not need. ExampleIn the following example, all the Orders for all the Customers who are located in London are retrieved when the query is executed. As a result, successive access to the Orders property on a Customer object does not trigger a new database query. C#Northwnd db = new Northwnd(@"c:\northwnd.mdf"); DataLoadOptions dlo = new DataLoadOptions(); dlo.LoadWith<Customer>(c => c.Orders); db.LoadOptions = dlo; var londonCustomers = from cust in db.Customers where cust.City == "London" select cust; foreach (var custObj in londonCustomers) { Console.WriteLine(custObj.CustomerID); } See alsoHow to: Filter Related DataUse the AssociateWith method to specify sub-queries to limit the amount of retrieved data. ExampleIn the following example, the AssociateWith method limits the Orders retrieved to those that have not been shipped today. Without this approach, all Orders would have been retrieved even though only a subset is desired. C#Northwnd db = new Northwnd(@"c:\northwnd.mdf"); DataLoadOptions dlo = new DataLoadOptions(); dlo.AssociateWith<Customer>(c => c.Orders.Where(p => p.ShippedDate != DateTime.Today)); db.LoadOptions = dlo; var custOrderQuery = from cust in db.Customers where cust.City == "London" select cust; foreach (Customer custObj in custOrderQuery) { Console.WriteLine(custObj.CustomerID); foreach (Order ord in custObj.Orders) { Console.WriteLine("\t {0}",ord.OrderDate); } } See alsoHow to: Turn Off Deferred LoadingYou can turn off deferred loading by setting DeferredLoadingEnabled to false. 
For more information, see Deferred versus Immediate Loading. Note Deferred loading is turned off by implication when object tracking is turned off. For more information, see How to: Retrieve Information As Read-Only. ExampleThe following example shows how to turn off deferred loading by setting DeferredLoadingEnabled to false. C#Northwnd db = new Northwnd(@"c:\northwnd.mdf"); db.DeferredLoadingEnabled = false; DataLoadOptions ds = new DataLoadOptions(); ds.LoadWith<Customer>(c => c.Orders); ds.LoadWith<Order>(o => o.OrderDetails); db.LoadOptions = ds; var custQuery = from cust in db.Customers where cust.City == "London" select cust; foreach (Customer custObj in custQuery) { Console.WriteLine("Customer ID: {0}", custObj.CustomerID); foreach (Order ord in custObj.Orders) { Console.WriteLine("\tOrder ID: {0}", ord.OrderID); foreach (OrderDetail detail in ord.OrderDetails) { Console.WriteLine("\t\tProduct ID: {0}", detail.ProductID); } } } See alsoHow to: Directly Execute SQL QueriesLINQ to SQL translates the queries you write into parameterized SQL queries (in text form) and sends them to the SQL server for processing. SQL cannot execute the variety of methods that might be locally available to your application. LINQ to SQL tries to convert these local methods to equivalent operations and functions that are available inside the SQL environment. Most methods and operators on .NET Framework built-in types have direct translations to SQL commands. Some can be produced from the functions that are available. Those that cannot be produced generate run-time exceptions. For more information, see SQL-CLR Type Mapping. In cases where a LINQ to SQL query is insufficient for a specialized task, you can use the ExecuteQuery method to execute a SQL query, and then convert the result of your query directly into objects. ExampleIn the following example, assume that the data for the Customer class is spread over two tables (customer1 and customer2). 
The query returns a sequence of Customer objects. C#Northwnd db = new Northwnd(@"c:\northwnd.mdf"); IEnumerable<Customer> results = db.ExecuteQuery<Customer> (@"SELECT c1.custid as CustomerID, c2.custName as ContactName FROM customer1 as c1, customer2 as c2 WHERE c1.custid = c2.custid" ); As long as the column names in the tabular results match column properties of your entity class, LINQ to SQL creates your objects out of any SQL query. ExampleThe ExecuteQuery method also allows for parameters. Use code such as the following to execute a parameterized query. C#Northwnd db = new Northwnd(@"c:\northwnd.mdf"); IEnumerable<Customer> results = db.ExecuteQuery<Customer> ("SELECT contactname FROM customers WHERE city = {0}", "London"); The parameters are expressed in the query text by using the same curly notation used by Console.WriteLine() and String.Format(). In fact, String.Format() is actually called on the query string you provide, substituting the curly braced parameters with generated parameter names such as @p0, @p1 …, @p(n). See alsoHow to: Store and Reuse QueriesWhen you have an application that executes structurally similar queries many times, you can often increase performance by compiling the query one time and executing it several times with different parameters. For example, an application might have to retrieve all the customers who are in a particular city, where the city is specified at runtime by the user in a form. LINQ to SQL supports the use of compiled queries for this purpose. Note This pattern of usage represents the most common use for compiled queries. Other approaches are possible. For example, compiled queries can be stored as static members on a partial class that extends the code generated by the designer. ExampleIn many scenarios you might want to reuse the queries across thread boundaries. In such cases, storing the compiled queries in static variables is especially effective. 
The following code example assumes a Queries class designed to store compiled queries, and assumes a Northwnd class that represents a strongly typed DataContext. C#public static Func<Northwnd, string, IQueryable<Customer>> CustomersByCity = CompiledQuery.Compile((Northwnd db, string city) => from c in db.Customers where c.City == city select c); public static Func<Northwnd, string, IQueryable<Customer>> CustomersById = CompiledQuery.Compile((Northwnd db, string id) => db.Customers.Where(c => c.CustomerID == id));C# // The following example invokes such a compiled query in the main // program. public IEnumerable<Customer> GetCustomersByCity(string city) { var myDb = GetNorthwind(); return Queries.CustomersByCity(myDb, city); } ExampleYou cannot currently store (in static variables) queries that return an anonymous type, because the type has no name to provide as a generic argument. The following example shows how you can work around the issue by creating a type that can represent the result, and then using it as a generic argument. C#class SimpleCustomer { public string ContactName { get; set; } } class Queries2 { public static Func<Northwnd, string, IEnumerable<SimpleCustomer>> CustomersByCity = CompiledQuery.Compile<Northwnd, string, IEnumerable<SimpleCustomer>>( (Northwnd db, string city) => from c in db.Customers where c.City == city select new SimpleCustomer { ContactName = c.ContactName }); } See alsoHow to: Handle Composite Keys in QueriesSome operators can take only one argument. If your argument must include more than one column from the database, you must create an anonymous type to represent the combination. ExampleThe following example shows a query that invokes the GroupBy operator, which can take only one key argument.
C#var query = from cust in db.Customers group cust.ContactName by new { City = cust.City, Region = cust.Region }; foreach (var grp in query) { Console.WriteLine("\nLocation Key: {0}", grp.Key); foreach (var listing in grp) { Console.WriteLine("\t{0}", listing); } } ExampleThe same situation pertains to joins, as in the following example: C#var query = from ord in db.Orders from prod in db.Products join det in db.OrderDetails on new { ord.OrderID, prod.ProductID } equals new { det.OrderID, det.ProductID } into details from det in details select new { ord.OrderID, prod.ProductID, det.UnitPrice }; See alsoHow to: Retrieve Many Objects At OnceYou can retrieve many objects in one query by using LoadWith. ExampleThe following code uses the LoadWith method to retrieve both Customer and Order objects. C#Northwnd db = new Northwnd(@"northwnd.mdf"); DataLoadOptions ds = new DataLoadOptions(); ds.LoadWith<Customer>(c => c.Orders); ds.LoadWith<Order>(o => o.OrderDetails); db.LoadOptions = ds; var custQuery = from cust in db.Customers where cust.City == "London" select cust; foreach (Customer custObj in custQuery) { Console.WriteLine("Customer ID: {0}", custObj.CustomerID); foreach (Order ord in custObj.Orders) { Console.WriteLine("\tOrder ID: {0}", ord.OrderID); foreach (OrderDetail detail in ord.OrderDetails) { Console.WriteLine("\t\tProduct ID: {0}", detail.ProductID); } } } See alsoHow to: Filter at the DataContext LevelYou can filter EntitySets at the DataContext level. Such filters apply to all queries done with that DataContext instance. ExampleIn the following example, DataLoadOptions.AssociateWith(LambdaExpression) is used to filter the pre-loaded orders for customers by ShippedDate. C#Northwnd db = new Northwnd(@"northwnd.mdf"); // Preload Orders for Customer. // One directive per relationship to be preloaded. 
DataLoadOptions ds = new DataLoadOptions(); ds.LoadWith<Customer>(c => c.Orders); ds.AssociateWith<Customer> (c => c.Orders.Where(p => p.ShippedDate != DateTime.Today)); db.LoadOptions = ds; var custQuery = from cust in db.Customers where cust.City == "London" select cust; foreach (Customer custObj in custQuery) { Console.WriteLine("Customer ID: {0}", custObj.CustomerID); foreach (Order ord in custObj.Orders) { Console.WriteLine("\tOrder ID: {0}", ord.OrderID); foreach (OrderDetail detail in ord.OrderDetails) { Console.WriteLine("\t\tProduct ID: {0}", detail.ProductID); } } } See alsoQuery ExamplesThis section provides Visual Basic and C# examples of typical LINQ to SQL queries. Developers using Visual Studio can find many more examples in a sample solution available in the Samples section. For more information, see Samples. Important db is often used in code examples in LINQ to SQL documentation. db is assumed to be an instance of a Northwind class, which inherits from DataContext. In This Section
Aggregate Queries
Return the First Element in a Sequence
Return Or Skip Elements in a Sequence
Sort Elements in a Sequence
Group Elements in a Sequence
Eliminate Duplicate Elements from a Sequence
Determine if Any or All Elements in a Sequence Satisfy a Condition
Concatenate Two Sequences
Return the Set Difference Between Two Sequences
Return the Set Intersection of Two Sequences
Return the Set Union of Two Sequences
Convert a Sequence to an Array
Convert a Sequence to a Generic List
Convert a Type to a Generic IEnumerable
Formulate Joins and Cross-Product Queries
Formulate Projections Related Sections
Standard Query Operators Overview (C#)
Standard Query Operators Overview (Visual Basic)
Query Concepts
Programming Guide Aggregate QueriesLINQ to SQL supports the Average, Count, Max, Min, and Sum aggregate operators. Note the following characteristics of aggregate operators in LINQ to SQL:
The examples in the following topics derive from the Northwind sample database. For more information, see Downloading Sample Databases. In This Section
Return the Average Value From a Numeric Sequence
Count the Number of Elements in a Sequence
Find the Maximum Value in a Numeric Sequence
Find the Minimum Value in a Numeric Sequence
Compute the Sum of Values in a Numeric Sequence Related Sections
Query Examples
Query Concepts
Introduction to LINQ Queries (C#) Return the Average Value From a Numeric SequenceThe Average operator computes the average of a sequence of numeric values. Note The LINQ to SQL translation of Average of integer values is computed as an integer, not as a double. ExampleThe following example returns the average of Freight values in the Orders table. Results from the sample Northwind database would be 78.2442. C#System.Nullable<Decimal> averageFreight = (from ord in db.Orders select ord.Freight) .Average(); Console.WriteLine(averageFreight); ExampleThe following example returns the average of the unit price of all Products in the Products table. Results from the sample Northwind database would be 28.8663. C#System.Nullable<Decimal> averageUnitPrice = (from prod in db.Products select prod.UnitPrice) .Average(); Console.WriteLine(averageUnitPrice); ExampleThe following example uses the Average operator to find those Products whose unit price is higher than the average unit price of the category it belongs to. The example then displays the results in groups. Note that this example requires the use of the var keyword in C#, because the return type is anonymous. 
C#var priceQuery = from prod in db.Products group prod by prod.CategoryID into grouping select new { grouping.Key, ExpensiveProducts = from prod2 in grouping where prod2.UnitPrice > grouping.Average(prod3 => prod3.UnitPrice) select prod2 }; foreach (var grp in priceQuery) { Console.WriteLine(grp.Key); foreach (var listing in grp.ExpensiveProducts) { Console.WriteLine(listing.ProductName); } } If you run this query against the Northwind sample database, the results should resemble the following: 1 Côte de Blaye Ipoh Coffee 2 Grandma's Boysenberry Spread Northwoods Cranberry Sauce Sirop d'érable Vegie-spread 3 Sir Rodney's Marmalade Gumbär Gummibärchen Schoggi Schokolade Tarte au sucre 4 Queso Manchego La Pastora Mascarpone Fabioli Raclette Courdavault Camembert Pierrot Gudbrandsdalsost Mozzarella di Giovanni 5 Gustaf's Knäckebröd Gnocchi di nonna Alice Wimmers gute Semmelknödel 6 Mishi Kobe Niku Thüringer Rostbratwurst 7 Rössle Sauerkraut Manjimup Dried Apples 8 Ikura Carnarvon Tigers Nord-Ost Matjeshering Gravad lax See alsoCount the Number of Elements in a SequenceUse the Count operator to count the number of elements in a sequence. ExampleThe following example counts the number of Customers in the database. Running this query against the Northwind sample database produces an output of 91. C#System.Int32 customerCount = db.Customers.Count(); Console.WriteLine(customerCount); ExampleThe following example counts the number of products in the database that have not been discontinued. Running this example against the Northwind sample database produces an output of 69. C#System.Int32 notDiscontinuedCount = (from prod in db.Products where !prod.Discontinued select prod) .Count(); Console.WriteLine(notDiscontinuedCount); See alsoFind the Maximum Value in a Numeric SequenceUse the Max operator to find the highest value in a sequence of numeric values. ExampleThe following example finds the latest date of hire for any employee.
If you run this query against the sample Northwind database, the output is: 11/15/1994 12:00:00 AM. C#System.Nullable<DateTime> latestHireDate = (from emp in db.Employees select emp.HireDate) .Max(); Console.WriteLine(latestHireDate); ExampleThe following example finds the most units in stock for any product. If you run this example against the sample Northwind database, the output is: 125. C#System.Nullable<Int16> maxUnitsInStock = (from prod in db.Products select prod.UnitsInStock) .Max(); Console.WriteLine(maxUnitsInStock); ExampleThe following example uses Max to find the Products that have the highest unit price in each category. The output then lists the results by category. C#var maxQuery = from prod in db.Products group prod by prod.CategoryID into grouping select new { grouping.Key, MostExpensiveProducts = from prod2 in grouping where prod2.UnitPrice == grouping.Max(prod3 => prod3.UnitPrice) select prod2 }; foreach (var grp in maxQuery) { Console.WriteLine(grp.Key); foreach (var listing in grp.MostExpensiveProducts) { Console.WriteLine(listing.ProductName); } } If you run the previous query against the Northwind sample database, your results will resemble the following: 1 Côte de Blaye 2 Vegie-spread 3 Sir Rodney's Marmalade 4 Raclette Courdavault 5 Gnocchi di nonna Alice 6 Thüringer Rostbratwurst 7 Manjimup Dried Apples 8 Carnarvon Tigers See alsoFind the Minimum Value in a Numeric SequenceUse the Min operator to return the minimum value from a sequence of numeric values. ExampleThe following example finds the lowest unit price of any product. If you run this query against the Northwind sample database, the output is: 2.5000. C#System.Nullable<Decimal> lowestUnitPrice = (from prod in db.Products select prod.UnitPrice) .Min(); Console.WriteLine(lowestUnitPrice); ExampleThe following example finds the lowest freight amount for any order. If you run this query against the Northwind sample database, the output is: 0.0200. 
C#System.Nullable<Decimal> lowestFreight = (from ord in db.Orders select ord.Freight) .Min(); Console.WriteLine(lowestFreight); ExampleThe following example uses Min to find the Products that have the lowest unit price in each category. The output is arranged by category. C#var minQuery = from prod in db.Products group prod by prod.CategoryID into grouping select new { grouping.Key, LeastExpensiveProducts = from prod2 in grouping where prod2.UnitPrice == grouping.Min(prod3 => prod3.UnitPrice) select prod2 }; foreach (var grp in minQuery) { Console.WriteLine(grp.Key); foreach (var listing in grp.LeastExpensiveProducts) { Console.WriteLine(listing.ProductName); } } If you run the previous query against the Northwind sample database, your results will resemble the following: 1 Guaraná Fantástica 2 Aniseed Syrup 3 Teatime Chocolate Biscuits 4 Geitost 5 Filo Mix 6 Tourtière 7 Longlife Tofu 8 Konbu See alsoCompute the Sum of Values in a Numeric SequenceUse the Sum operator to compute the sum of numeric values in a sequence. Note the following characteristics of the Sum operator in LINQ to SQL:
ExampleThe following example finds the total freight of all orders in the Order table. If you run this query against the Northwind sample database, the output is: 64942.6900. C#System.Nullable<Decimal> totalFreight = (from ord in db.Orders select ord.Freight) .Sum(); Console.WriteLine(totalFreight); ExampleThe following example finds the total number of units on order for all products. If you run this query against the Northwind sample database, the output is: 780. Note that you must cast short types (for example, UnitsOnOrder) because Sum has no overload for short types. C#System.Nullable<long> totalUnitsOnOrder = (from prod in db.Products select (long)prod.UnitsOnOrder) .Sum(); Console.WriteLine(totalUnitsOnOrder); See alsoReturn the First Element in a SequenceUse the First operator to return the first element in a sequence. Queries that use First are executed immediately. Note LINQ to SQL does not support the Last operator. ExampleThe following code finds the first Shipper in a table: If you run this query against the Northwind sample database, the results are ID = 1, Company = Speedy Express. C#Shipper shipper = db.Shippers.First(); Console.WriteLine("ID = {0}, Company = {1}", shipper.ShipperID, shipper.CompanyName); ExampleThe following code finds the single Customer that has the CustomerID BONAP. If you run this query against the Northwind sample database, the results are ID = BONAP, Contact = Laurence Lebihan. C#Customer custQuery = (from custs in db.Customers where custs.CustomerID == "BONAP" select custs) .First(); Console.WriteLine("ID = {0}, Contact = {1}", custQuery.CustomerID, custQuery.ContactName); See alsoReturn Or Skip Elements in a SequenceUse the Take operator to return a given number of elements in a sequence and then skip over the remainder. Use the Skip operator to skip over a given number of elements in a sequence and then return the remainder. Note Take and Skip have certain limitations when they are used in queries against SQL Server 2000. 
For more information, see the "Skip and Take Exceptions in SQL Server 2000" entry in Troubleshooting. LINQ to SQL translates Skip by using a subquery with the SQL NOT EXISTS clause. This translation has the following limitations:
ExampleThe following example uses Take to select the first five Employees hired. Note that the collection is first sorted by HireDate. C#IQueryable<Employee> firstHiredQuery = (from emp in db.Employees orderby emp.HireDate select emp) .Take(5); foreach (Employee empObj in firstHiredQuery) { Console.WriteLine("{0}, {1}", empObj.EmployeeID, empObj.HireDate); } ExampleThe following example uses Skip to select all except the 10 most expensive Products. C#IQueryable<Product> lessExpensiveQuery = (from prod in db.Products orderby prod.UnitPrice descending select prod) .Skip(10); foreach (Product prodObj in lessExpensiveQuery) { Console.WriteLine(prodObj.ProductName); } ExampleThe following example combines the Skip and Take methods to skip the first 50 records and then return the next 10. C#var custQuery2 = (from cust in db.Customers orderby cust.ContactName select cust) .Skip(50).Take(10); foreach (var custRecord in custQuery2) { Console.WriteLine(custRecord.ContactName); } Take and Skip operations are well defined only against ordered sets. The semantics for unordered sets or multisets is undefined. Because of the limitations on ordering in SQL, LINQ to SQL tries to move the ordering of the argument of the Take or Skip operator to the result of the operator. Note Translation is different for SQL Server 2000 and SQL Server 2005. If you plan to use Skip with a query of any complexity, use SQL Server 2005. 
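The requirement that Take and Skip be used against ordered sequences can be illustrated with plain LINQ to Objects, which shares the same standard query operators. The following minimal, self-contained sketch uses hypothetical in-memory data rather than the Northwind database:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class SkipTakePaging
{
    static void Main()
    {
        // Hypothetical in-memory stand-in for a table of customer cities.
        var cities = new List<string> { "Berlin", "London", "Aachen", "Madrid", "London" };

        // Sorting first makes Skip/Take deterministic: skip the first page
        // of two elements, then take the second page of two.
        var secondPage = cities.OrderBy(c => c)
                               .Skip(2)
                               .Take(2)
                               .ToList();

        foreach (var city in secondPage)
        {
            Console.WriteLine(city);  // London, London
        }
        // Without the OrderBy, Skip and Take still execute, but which
        // elements form a "page" is undefined for an unordered sequence.
    }
}
```

The same Skip(n).Take(m) composition is what LINQ to SQL must translate into SQL, which is why consistent ordering matters on both sides of the chain.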
Consider the following LINQ to SQL query for SQL Server 2000: C#IQueryable<Customer> custQuery3 = (from custs in db.Customers where custs.City == "London" orderby custs.CustomerID select custs) .Skip(1).Take(1); foreach (var custObj in custQuery3) { Console.WriteLine(custObj.CustomerID); } LINQ to SQL moves the ordering to the end in the SQL code, as follows: SELECT TOP 1 [t0].[CustomerID], [t0].[CompanyName] FROM [Customers] AS [t0] WHERE (NOT (EXISTS( SELECT NULL AS [EMPTY] FROM ( SELECT TOP 1 [t1].[CustomerID] FROM [Customers] AS [t1] WHERE [t1].[City] = @p0 ORDER BY [t1].[CustomerID] ) AS [t2] WHERE [t0].[CustomerID] = [t2].[CustomerID] ))) AND ([t0].[City] = @p1) ORDER BY [t0].[CustomerID] When Take and Skip are chained together, all the specified ordering must be consistent. Otherwise, the results are undefined. Based on the SQL specification, both Take and Skip are well defined for non-negative, constant integral arguments. See alsoSort Elements in a SequenceUse the OrderBy operator to sort a sequence according to one or more keys. Note LINQ to SQL is designed to support ordering by simple primitive types, such as string, int, and so on. It does not support ordering for complex multi-valued classes, such as anonymous types. It also does not support byte data types. ExampleThe following example sorts Employees by date of hire. C#IOrderedQueryable<Employee> hireQuery = from emp in db.Employees orderby emp.HireDate select emp; foreach (Employee empObj in hireQuery) { Console.WriteLine("EmpID = {0}, Date Hired = {1}", empObj.EmployeeID, empObj.HireDate); } ExampleThe following example uses a where clause to select Orders shipped to London and sorts them by freight. 
C#IOrderedQueryable<Order> freightQuery = from ord in db.Orders where ord.ShipCity == "London" orderby ord.Freight select ord; foreach (Order ordObj in freightQuery) { Console.WriteLine("Order ID = {0}, Freight = {1}", ordObj.OrderID, ordObj.Freight); } ExampleThe following example sorts Products by unit price from highest to lowest. C#IOrderedQueryable<Product> priceQuery = from prod in db.Products orderby prod.UnitPrice descending select prod; foreach (Product prodObj in priceQuery) { Console.WriteLine("Product ID = {0}, Unit Price = {1}", prodObj.ProductID, prodObj.UnitPrice); } ExampleThe following example uses a compound OrderBy to sort Customers by city and then by contact name. C#IOrderedQueryable<Customer> custQuery = from cust in db.Customers orderby cust.City, cust.ContactName select cust; foreach (Customer custObj in custQuery) { Console.WriteLine("City = {0}, Name = {1}", custObj.City, custObj.ContactName); } ExampleThe following example sorts Orders from EmployeeID 1 by ShipCountry, and then by highest to lowest freight. C#IOrderedQueryable<Order> ordQuery = from ord in db.Orders where ord.EmployeeID == 1 orderby ord.ShipCountry, ord.Freight descending select ord; foreach (Order ordObj in ordQuery) { Console.WriteLine("Country = {0}, Freight = {1}", ordObj.ShipCountry, ordObj.Freight); } ExampleThe following example combines OrderBy, Max, and GroupBy operators to find the Products that have the highest unit price in each category, and then sorts the group by category id. 
C#var highPriceQuery = from prod in db.Products group prod by prod.CategoryID into grouping orderby grouping.Key select new { grouping.Key, MostExpensiveProducts = from prod2 in grouping where prod2.UnitPrice == grouping.Max(p3 => p3.UnitPrice) select prod2 }; foreach (var prodObj in highPriceQuery) { Console.WriteLine(prodObj.Key); foreach (var listing in prodObj.MostExpensiveProducts) { Console.WriteLine(listing.ProductName); } } If you run the previous query against the Northwind sample database, the results will resemble the following: 1 Côte de Blaye 2 Vegie-spread 3 Sir Rodney's Marmalade 4 Raclette Courdavault 5 Gnocchi di nonna Alice 6 Thüringer Rostbratwurst 7 Manjimup Dried Apples 8 Carnarvon Tigers See alsoGroup Elements in a SequenceThe GroupBy operator groups the elements of a sequence. The following examples use the Northwind database. Note Null column values in GroupBy queries can sometimes throw an InvalidOperationException. For more information, see the "GroupBy InvalidOperationException" section of Troubleshooting. ExampleThe following example partitions Products by CategoryID. C#IQueryable<IGrouping<Int32?, Product>> prodQuery = from prod in db.Products group prod by prod.CategoryID into grouping select grouping; foreach (IGrouping<Int32?, Product> grp in prodQuery) { Console.WriteLine("\nCategoryID Key = {0}:", grp.Key); foreach (Product listing in grp) { Console.WriteLine("\t{0}", listing.ProductName); } } ExampleThe following example uses Max to find the maximum unit price for each CategoryID. C#var q = from p in db.Products group p by p.CategoryID into g select new { g.Key, MaxPrice = g.Max(p => p.UnitPrice) }; ExampleThe following example uses Average to find the average UnitPrice for each CategoryID. C#var q2 = from p in db.Products group p by p.CategoryID into g select new { g.Key, AveragePrice = g.Average(p => p.UnitPrice) }; ExampleThe following example uses Sum to find the total UnitPrice for each CategoryID. 
C#var priceQuery = from prod in db.Products group prod by prod.CategoryID into grouping select new { grouping.Key, TotalPrice = grouping.Sum(p => p.UnitPrice) }; foreach (var grp in priceQuery) { Console.WriteLine("Category = {0}, Total price = {1}", grp.Key, grp.TotalPrice); } ExampleThe following example uses Count to find the number of discontinued Products in each CategoryID. C#var disconQuery = from prod in db.Products group prod by prod.CategoryID into grouping select new { grouping.Key, NumProducts = grouping.Count(p => p.Discontinued) }; foreach (var prodObj in disconQuery) { Console.WriteLine("CategoryID = {0}, Discontinued# = {1}", prodObj.Key, prodObj.NumProducts); } ExampleThe following example uses a where clause that follows the grouping to find all categories that have at least 10 products. C#var prodCountQuery = from prod in db.Products group prod by prod.CategoryID into grouping where grouping.Count() >= 10 select new { grouping.Key, ProductCount = grouping.Count() }; foreach (var prodCount in prodCountQuery) { Console.WriteLine("CategoryID = {0}, Product count = {1}", prodCount.Key, prodCount.ProductCount); } ExampleThe following example groups products by CategoryID and SupplierID. C#var prodQuery = from prod in db.Products group prod by new { prod.CategoryID, prod.SupplierID } into grouping select new { grouping.Key, grouping }; foreach (var grp in prodQuery) { Console.WriteLine("\nCategoryID {0}, SupplierID {1}", grp.Key.CategoryID, grp.Key.SupplierID); foreach (var listing in grp.grouping) { Console.WriteLine("\t{0}", listing.ProductName); } } ExampleThe following example returns two sequences of products. The first sequence contains products with unit price less than or equal to 10. The second sequence contains products with unit price greater than 10. 
C#var priceQuery = from prod in db.Products group prod by new { Criterion = prod.UnitPrice > 10 } into grouping select grouping; foreach (var prodObj in priceQuery) { if (prodObj.Key.Criterion == false) Console.WriteLine("Prices 10 or less:"); else Console.WriteLine("\nPrices greater than 10"); foreach (var listing in prodObj) { Console.WriteLine("{0}, {1}", listing.ProductName, listing.UnitPrice); } } ExampleThe GroupBy operator can take only a single key argument. If you need to group by more than one key, you must create an anonymous type, as in the following example: C#var custRegionQuery = from cust in db.Customers group cust.ContactName by new { City = cust.City, Region = cust.Region }; foreach (var grp in custRegionQuery) { Console.WriteLine("\nLocation Key: {0}", grp.Key); foreach (var listing in grp) { Console.WriteLine("\t{0}", listing); } } See alsoEliminate Duplicate Elements from a SequenceUse the Distinct operator to eliminate duplicate elements from a sequence. ExampleThe following example uses Distinct to select a sequence of the unique cities that have customers. C#IQueryable<String> cityQuery = (from cust in db.Customers select cust.City).Distinct(); foreach (String cityString in cityQuery) { Console.WriteLine(cityString); } See alsoDetermine if Any or All Elements in a Sequence Satisfy a ConditionThe All operator returns true if all elements in a sequence satisfy a condition. The Any operator returns true if any element in a sequence satisfies a condition. ExampleThe following example returns a sequence of customers that have at least one order. The Where/where clause evaluates to true if the given Customer has any Order. C#var OrdersQuery = from cust in db.Customers where cust.Orders.Any() select cust; ExampleThe following Visual Basic code determines the list of customers who have not placed orders, and ensures that for every customer in that list, a contact name is provided. 
VBPublic Sub ContactsAvailable() Dim db As New Northwnd("c:\northwnd.mdf") Dim result = _ (From cust In db.Customers _ Where Not cust.Orders.Any() _ Select cust).All(AddressOf ContactAvailable) If result Then Console.WriteLine _ ("All of the customers who have made no orders have a contact name") Else Console.WriteLine _ ("Some customers who have made no orders have no contact name") End If End Sub Function ContactAvailable(ByVal contact As Object) As Boolean Dim cust As Customer = CType(contact, Customer) Return Not (cust.ContactName Is Nothing OrElse _ cust.ContactName.Trim().Length = 0) End Function ExampleThe following C# example returns a sequence of customers whose orders have a ShipCity beginning with "C". Also included in the return are customers who have no orders. (By design, the All operator returns true for an empty sequence.) Customers with no orders are eliminated in the console output by using the Count operator. C#var custEmpQuery = from cust in db.Customers where cust.Orders.All(o => o.ShipCity.StartsWith("C")) orderby cust.CustomerID select cust; foreach (Customer custObj in custEmpQuery) { if (custObj.Orders.Count > 0) Console.WriteLine("CustomerID: {0}", custObj.CustomerID); foreach (Order ordObj in custObj.Orders) { Console.WriteLine("\t OrderID: {0}; ShipCity: {1}", ordObj.OrderID, ordObj.ShipCity); } } See alsoConcatenate Two SequencesUse the Concat operator to concatenate two sequences. The Concat operator is defined for ordered multisets where the orders of the receiver and the argument are the same. Ordering in SQL is the final step before results are produced. For this reason, the Concat operator is implemented by using UNION ALL and does not preserve the order of its arguments. To ensure correct ordering in the results, explicitly order them. ExampleThis example uses Concat to return a sequence of all Customer and Employee telephone and fax numbers. 
C#IQueryable<String> custQuery = (from cust in db.Customers select cust.Phone) .Concat (from cust in db.Customers select cust.Fax) .Concat (from emp in db.Employees select emp.HomePhone) ; foreach (var custData in custQuery) { Console.WriteLine(custData); } ExampleThis example uses Concat to return a sequence of all Customer and Employee name and telephone number mappings. C#var infoQuery = (from cust in db.Customers select new { Name = cust.CompanyName, cust.Phone } ) .Concat (from emp in db.Employees select new { Name = emp.FirstName + " " + emp.LastName, Phone = emp.HomePhone } ); foreach (var infoData in infoQuery) { Console.WriteLine("Name = {0}, Phone = {1}", infoData.Name, infoData.Phone); } See alsoReturn the Set Difference Between Two SequencesUse the Except operator to return the set difference between two sequences. ExampleThis example uses Except to return a sequence of all countries/regions in which Customers live but in which no Employees live. C#var infoQuery = (from cust in db.Customers select cust.Country) .Except (from emp in db.Employees select emp.Country) ; In LINQ to SQL, the Except operation is well defined only on sets. The semantics for multisets is undefined. See alsoReturn the Set Intersection of Two SequencesUse the Intersect operator to return the set intersection of two sequences. ExampleThis example uses Intersect to return a sequence of all countries/regions in which both Customers and Employees live. C#var infoQuery = (from cust in db.Customers select cust.Country) .Intersect (from emp in db.Employees select emp.Country) ; In LINQ to SQL, the Intersect operation is well defined only on sets. The semantics for multisets is undefined. See alsoReturn the Set Union of Two SequencesUse the Union operator to return the set union of two sequences. ExampleThis example uses Union to return a sequence of all countries/regions in which there are either Customers or Employees. 
C#var infoQuery = (from cust in db.Customers select cust.Country) .Union (from emp in db.Employees select emp.Country) ; In LINQ to SQL, the Union operator removes duplicate elements from the result (effectively the result of the UNION clause in SQL); use Concat when you want UNION ALL semantics. For more information and examples, see Queryable.Union. See alsoConvert a Sequence to an ArrayUse ToArray to create an array from a sequence. ExampleThe following example uses ToArray to immediately evaluate a query into an array and to get the third element. C#var custQuery = from cust in db.Customers where cust.City == "London" select cust; Customer[] qArray = custQuery.ToArray(); Customer thirdCustomer = qArray[2]; See alsoConvert a Sequence to a Generic ListUse ToList to create a generic List from a sequence. ExampleThe following sample uses ToList to immediately evaluate a query into a generic List<T>. C#var empQuery = from emp in db.Employees where emp.HireDate >= new DateTime(1994, 1, 1) select emp; List<Employee> qList = empQuery.ToList(); See alsoConvert a Type to a Generic IEnumerableUse AsEnumerable to return the argument typed as a generic IEnumerable. ExampleIn this example, LINQ to SQL (using the default generic IQueryable<T> implementation) would try to convert the query to SQL and execute it on the server. But the where clause references a user-defined client-side method (isValidProduct), which cannot be converted to SQL. The solution is to specify the client-side generic IEnumerable<T> implementation of where to replace the generic IQueryable<T>. You do this by invoking the AsEnumerable operator. C#private bool isValidProduct(Product prod) { return prod.ProductName.LastIndexOf('C') == 0; } void ConvertToIEnumerable() { Northwnd db = new Northwnd(@"c:\test\northwnd.mdf"); var prodQuery = from prod in db.Products.AsEnumerable() where isValidProduct(prod) select prod; } See alsoFormulate Joins and Cross-Product QueriesThe following examples show how to combine results from multiple tables. 
ExampleThe following example uses foreign key navigation in the From clause in Visual Basic (from clause in C#) to select all orders for customers in London. C#var infoQuery = from cust in db.Customers from ord in cust.Orders where cust.City == "London" select ord; ExampleThe following example uses foreign key navigation in the Where clause in Visual Basic (where clause in C#) to filter for out-of-stock Products whose Supplier is in the United States. C#var infoQuery = from prod in db.Products where prod.Supplier.Country == "USA" && prod.UnitsInStock == 0 select prod; ExampleThe following example uses foreign key navigation in the From clause in Visual Basic (from clause in C#) to filter for employees in Seattle and to list their territories. C#var infoQuery = from emp in db.Employees from empterr in emp.EmployeeTerritories where emp.City == "Seattle" select new { emp.FirstName, emp.LastName, empterr.Territory.TerritoryDescription }; ExampleThe following example uses foreign key navigation in the Select clause in Visual Basic (select clause in C#) to filter for pairs of employees where one employee reports to the other and where both employees are from the same City. C#var infoQuery = from emp1 in db.Employees from emp2 in emp1.Employees where emp1.City == emp2.City select new { FirstName1 = emp1.FirstName, LastName1 = emp1.LastName, FirstName2 = emp2.FirstName, LastName2 = emp2.LastName, emp1.City }; ExampleThe following Visual Basic example queries all customers and their orders, matches each order to its customer, and selects the company name and ship region for each matching pair. 
VBDim q1 = From c In db.Customers, o In db.Orders _ Where c.CustomerID = o.CustomerID _ Select c.CompanyName, o.ShipRegion ' Note that because the O/R designer generates class ' hierarchies for database relationships for you, ' the following code has the same effect as the above ' and is shorter: Dim q2 = From c In db.Customers, o In c.Orders _ Select c.CompanyName, o.ShipRegion For Each nextItem In q2 Console.WriteLine("{0} {1}", nextItem.CompanyName, _ nextItem.ShipRegion) Next ExampleThe following example explicitly joins two tables and projects results from both tables. C#var q = from c in db.Customers join o in db.Orders on c.CustomerID equals o.CustomerID into orders select new { c.ContactName, OrderCount = orders.Count() }; ExampleThe following example explicitly joins three tables and projects results from each of them. C#var q = from c in db.Customers join o in db.Orders on c.CustomerID equals o.CustomerID into ords join e in db.Employees on c.City equals e.City into emps select new { c.ContactName, ords = ords.Count(), emps = emps.Count() }; ExampleThe following example shows how to achieve a LEFT OUTER JOIN by using DefaultIfEmpty(). The DefaultIfEmpty() method returns null when there is no Order for the Employee. C#var q = from e in db.Employees join o in db.Orders on e equals o.Employee into ords from o in ords.DefaultIfEmpty() select new { e.FirstName, e.LastName, Order = o }; ExampleThe following example projects a let expression resulting from a join. C#var q = from c in db.Customers join o in db.Orders on c.CustomerID equals o.CustomerID into ords let z = c.City + c.Country from o in ords select new { c.ContactName, o.OrderID, z }; ExampleThe following example shows a join with a composite key. 
C#var q = from o in db.Orders from p in db.Products join d in db.OrderDetails on new { o.OrderID, p.ProductID } equals new { d.OrderID, d.ProductID } into details from d in details select new { o.OrderID, p.ProductID, d.UnitPrice }; ExampleThe following example shows how to construct a join where one side is nullable and the other is not. C#var q = from o in db.Orders join e in db.Employees on o.EmployeeID equals (int?)e.EmployeeID into emps from e in emps select new { o.OrderID, e.FirstName }; See alsoFormulate ProjectionsThe following examples show how the select clause in C# and the Select clause in Visual Basic can be combined with other features to form query projections. ExampleThe following example uses the Select clause in Visual Basic (select clause in C#) to return a sequence of contact names for Customers. C#var nameQuery = from cust in db.Customers select cust.ContactName; ExampleThe following example uses the Select clause in Visual Basic (select clause in C#) and anonymous types to return a sequence of contact names and telephone numbers for Customers. C#var infoQuery = from cust in db.Customers select new { cust.ContactName, cust.Phone }; ExampleThe following example uses the Select clause in Visual Basic (select clause in C#) and anonymous types to return a sequence of names and telephone numbers for employees. The FirstName and LastName fields are combined into a single field (Name), and the HomePhone field is renamed to Phone in the resulting sequence. C#var info2Query = from emp in db.Employees select new { Name = emp.FirstName + " " + emp.LastName, Phone = emp.HomePhone }; ExampleThe following example uses the Select clause in Visual Basic (select clause in C#) and anonymous types to return a sequence of all ProductIDs and a calculated value named HalfPrice. This value is set to the UnitPrice divided by 2. 
C#var specialQuery = from prod in db.Products select new { prod.ProductID, HalfPrice = prod.UnitPrice / 2 }; ExampleThe following example uses the Select clause in Visual Basic (select clause in C#) and a conditional statement to return a sequence of product name and product availability. C#var prodQuery = from prod in db.Products select new { prod.ProductName, Availability = prod.UnitsInStock - prod.UnitsOnOrder < 0 ? "Out Of Stock" : "In Stock" }; ExampleThe following example uses a Visual Basic Select clause (select clause in C#) and a known type (Name) to return a sequence of the names of employees. C#public class Name { public string FirstName = ""; public string LastName = ""; } void empMethod() { Northwnd db = new Northwnd(@"c:\northwnd.mdf"); var empQuery = from emp in db.Employees select new Name { FirstName = emp.FirstName, LastName = emp.LastName }; } ExampleThe following example uses Select and Where in Visual Basic (select and where in C#) to return a filtered sequence of contact names for customers in London. C#var contactQuery = from cust in db.Customers where cust.City == "London" select cust.ContactName; ExampleThe following example uses a Select clause in Visual Basic (select clause in C#) and anonymous types to return a shaped subset of the data about customers. C#var custQuery = from cust in db.Customers select new { cust.CustomerID, CompanyInfo = new { cust.CompanyName, cust.City, cust.Country }, ContactInfo = new { cust.ContactName, cust.ContactTitle } }; ExampleThe following example uses nested queries to return the following results:
var ordQuery = from ord in db.Orders select new { ord.OrderID, DiscountedProducts = from od in ord.OrderDetails where od.Discount > 0.0 select od, FreeShippingDiscount = ord.Freight }; See alsoHow to: Insert Rows Into the DatabaseYou insert rows into a database by adding objects to the associated LINQ to SQL Table<TEntity> collection and then submitting the changes to the database. LINQ to SQL translates your changes into the appropriate SQL INSERT commands. Note You can override LINQ to SQL default methods for Insert, Update, and Delete database operations. For more information, see Customizing Insert, Update, and Delete Operations. Developers using Visual Studio can use the Object Relational Designer to develop stored procedures for the same purpose. The following steps assume that a valid DataContext connects you to the Northwind database. For more information, see How to: Connect to a Database. To insert a row into the database
ExampleThe following code example creates a new object of type Order and populates it with appropriate values. It then adds the new object to the Order collection. Finally, it submits the change to the database as a new row in the Orders table. C#// Create a new Order object. Order ord = new Order { OrderID = 12000, ShipCity = "Seattle", OrderDate = DateTime.Now // … }; // Add the new object to the Orders collection. db.Orders.InsertOnSubmit(ord); // Submit the change to the database. try { db.SubmitChanges(); } catch (Exception e) { Console.WriteLine(e); // Make some adjustments. // ... // Try again. db.SubmitChanges(); } See also
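The Note above mentions that you can override the default Insert operation. With a designer-generated DataContext this is done through partial methods: if the generated context declares a partial method such as InsertOrder, providing an implementation makes LINQ to SQL call it instead of its default insert logic. The following is a hypothetical sketch (not part of the original example); it assumes a generated Northwnd context with an Orders table, and it is not runnable without that generated code and a database:

```csharp
using System;

// Hypothetical sketch: overriding the default insert behavior through the
// partial method declared on the generated Northwnd DataContext.
public partial class Northwnd
{
    partial void InsertOrder(Order instance)
    {
        // Custom logic can run before the row is written.
        Console.WriteLine("Inserting order {0}", instance.OrderID);

        // Fall back to the dynamic SQL that LINQ to SQL would have generated.
        ExecuteDynamicInsert(instance);
    }
}
```

For the full mechanism, including UpdateOrder and DeleteOrder counterparts and stored-procedure mappings, see Customizing Insert, Update, and Delete Operations.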
How to: Update Rows in the DatabaseYou can update rows in a database by modifying member values of the objects associated with the LINQ to SQL Table<TEntity> collection and then submitting the changes to the database. LINQ to SQL translates your changes into the appropriate SQL UPDATE commands. Note You can override LINQ to SQL default methods for Insert, Update, and Delete database operations. For more information, see Customizing Insert, Update, and Delete Operations. Developers using Visual Studio can use the Object Relational Designer to develop stored procedures for the same purpose. The following steps assume that a valid DataContext connects you to the Northwind database. For more information, see How to: Connect to a Database. To update a row in the database
ExampleThe following example queries the database for order #11000, and then changes the values of ShipName and ShipVia in the resulting Order object. Finally, the changes to these member values are submitted to the database as changes in the ShipName and ShipVia columns. C#// Query the database for the row to be updated. var query = from ord in db.Orders where ord.OrderID == 11000 select ord; // Execute the query, and change the column values // you want to change. foreach (Order ord in query) { ord.ShipName = "Mariner"; ord.ShipVia = 2; // Insert any additional changes to column values. } // Submit the changes to the database. try { db.SubmitChanges(); } catch (Exception e) { Console.WriteLine(e); // Provide for exceptions. } See also
How to: Delete Rows From the DatabaseYou can delete rows in a database by removing the corresponding LINQ to SQL objects from their table-related collection. LINQ to SQL translates your changes to the appropriate SQL DELETE commands. LINQ to SQL does not support or recognize cascade-delete operations. If you want to delete a row in a table that has constraints against it, you must complete either of the following tasks:
Otherwise, an exception is thrown. See the second code example later in this topic. Note You can override LINQ to SQL default methods for Insert, Update, and Delete database operations. For more information, see Customizing Insert, Update, and Delete Operations. Developers using Visual Studio can use the Object Relational Designer to develop stored procedures for the same purpose. The following steps assume that a valid DataContext connects you to the Northwind database. For more information, see How to: Connect to a Database. To delete a row in the database
ExampleThis first code example queries the database for order details that belong to Order #11000, marks these order details for deletion, and submits these changes to the database. C#// Query the database for the rows to be deleted. var deleteOrderDetails = from details in db.OrderDetails where details.OrderID == 11000 select details; foreach (var detail in deleteOrderDetails) { db.OrderDetails.DeleteOnSubmit(detail); } try { db.SubmitChanges(); } catch (Exception e) { Console.WriteLine(e); // Provide for exceptions. } ExampleIn this second example, the objective is to remove an order (#10250). The code first examines the OrderDetails table to see whether the order to be removed has children there. If the order has children, first the children and then the order are marked for removal. The DataContext puts the actual deletes in the correct order so that delete commands sent to the database abide by the database constraints. C#Northwnd db = new Northwnd(@"c:\northwnd.mdf"); db.Log = Console.Out; // Specify order to be removed from database int reqOrder = 10250; // Fetch OrderDetails for requested order. var ordDetailQuery = from odq in db.OrderDetails where odq.OrderID == reqOrder select odq; foreach (var selectedDetail in ordDetailQuery) { Console.WriteLine(selectedDetail.Product.ProductID); db.OrderDetails.DeleteOnSubmit(selectedDetail); } // Display progress. Console.WriteLine("detail section finished."); Console.ReadLine(); // Determine from Detail collection whether parent exists. if (ordDetailQuery.Any()) { Console.WriteLine("The parent is present in the Orders collection."); // Fetch Order. 
try { var ordFetch = (from ofetch in db.Orders where ofetch.OrderID == reqOrder select ofetch).First(); db.Orders.DeleteOnSubmit(ordFetch); Console.WriteLine("{0} OrderID is marked for deletion.", ordFetch.OrderID); } catch (Exception e) { Console.WriteLine(e.Message); Console.ReadLine(); } } else { Console.WriteLine("There was no parent in the Orders collection."); } // Display progress. Console.WriteLine("Order section finished."); Console.ReadLine(); try { db.SubmitChanges(); } catch (Exception e) { Console.WriteLine(e.Message); Console.ReadLine(); } // Display progress. Console.WriteLine("Submit finished."); Console.ReadLine(); See also
How to: Submit Changes to the DatabaseRegardless of how many changes you make to your objects, changes are made only to in-memory replicas. You have made no changes to the actual data in the database. Your changes are not transmitted to the server until you explicitly call SubmitChanges on the DataContext. When you make this call, the DataContext tries to translate your changes into equivalent SQL commands. You can use your own custom logic to override these actions, but the order of submission is orchestrated by a service of the DataContext known as the change processor. The sequence of events is as follows:
At this point, any errors detected by the database cause the submission process to stop, and an exception is raised. All changes to the database are rolled back as if no submissions ever occurred. The DataContext still has a full recording of all changes. You can therefore try to correct the problem and call SubmitChanges again, as in the code example that follows. ExampleWhen the transaction around the submission is completed successfully, the DataContext accepts the changes to the objects by ignoring the change-tracking information. C#Northwnd db = new Northwnd(@"c:\northwnd.mdf"); // Make changes here. try { db.SubmitChanges(); } catch (ChangeConflictException e) { Console.WriteLine(e.Message); // Make some adjustments. // ... // Try again. db.SubmitChanges(); } See also
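Because nothing reaches the server until SubmitChanges is called, the pending work can be inspected first with GetChangeSet. The following is a minimal sketch, assuming the Northwnd mapping and sample data used elsewhere in this section:

```csharp
Northwnd db = new Northwnd(@"c:\northwnd.mdf");

// Edit an entity: only the in-memory replica changes.
Customer cust = db.Customers.First(c => c.CustomerID == "ALFKI");
cust.ContactName = "New Contact";

// The change processor has recorded the edit, but no SQL has been sent yet.
Console.WriteLine(db.GetChangeSet());

// Only now does the DataContext translate the edit into SQL and execute it.
db.SubmitChanges();
```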
How to: Bracket Data Submissions by Using TransactionsYou can use TransactionScope to bracket your submissions to the database. For more information, see Transaction Support. ExampleThe following code encloses the database submission in a TransactionScope. C#Northwnd db = new Northwnd(@"c:\northwnd.mdf"); using (TransactionScope ts = new TransactionScope()) { try { Product prod1 = db.Products.First(p => p.ProductID == 4); Product prod2 = db.Products.First(p => p.ProductID == 5); prod1.UnitsInStock -= 3; prod2.UnitsInStock -= 5; db.SubmitChanges(); ts.Complete(); } catch (Exception e) { Console.WriteLine(e.Message); } } See alsoHow to: Dynamically Create a DatabaseIn LINQ to SQL, an object model is mapped to a relational database. Mapping is enabled by using attribute-based mapping or an external mapping file to describe the structure of the relational database. In both scenarios, there is enough information about the relational database that you can create a new instance of the database using the DataContext.CreateDatabase method. The DataContext.CreateDatabase method creates a replica of the database only to the extent of the information encoded in the object model. Mapping files and attributes from your object model might not encode everything about the structure of an existing database. Mapping information does not represent the contents of user-defined functions, stored procedures, triggers, or check constraints. This behavior is sufficient for a variety of databases. You can use the DataContext.CreateDatabase method in any number of scenarios, especially if a known data provider like Microsoft SQL Server 2008 is available. Typical scenarios include the following:
You can also use the DataContext.CreateDatabase method with SQL Server by using an .mdf file or a catalog name, depending on your connection string. LINQ to SQL uses the connection string to define the database to be created and on which server the database is to be created. Note Whenever possible, use Windows Integrated Security to connect to the database so that passwords are not required in the connection string. ExampleThe following code provides an example of how to create a new database named MyDVDs.mdf. C#public class MyDVDs : DataContext { public Table<DVD> DVDs; public MyDVDs(string connection) : base(connection) { } } [Table(Name = "DVDTable")] public class DVD { [Column(IsPrimaryKey = true)] public string Title; [Column] public string Rating; } ExampleYou can use the object model to create a database by doing the following: C#public void CreateDatabase() { MyDVDs db = new MyDVDs("c:\\mydvds.mdf"); db.CreateDatabase(); } ExampleWhen building an application that automatically installs itself on a customer system, see if the database already exists and drop it before creating a new one. The DataContext class provides the DatabaseExists and DeleteDatabase methods to help you with this process. The following example shows one way these methods can be used to implement this approach: C#public void CreateDatabase2() { MyDVDs db = new MyDVDs(@"c:\mydvds.mdf"); if (db.DatabaseExists()) { Console.WriteLine("Deleting old database..."); db.DeleteDatabase(); } db.CreateDatabase(); } See also
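The catalog-name variant mentioned above uses a server connection string instead of an .mdf file path. A sketch under the assumption that a SQL Server instance named myserver is reachable (the server name is hypothetical; MyDVDs is the DataContext class defined earlier):

```csharp
public void CreateDatabaseOnServer()
{
    // The database is created in the Initial Catalog named in the connection string.
    MyDVDs db = new MyDVDs(
        "Data Source=myserver;Initial Catalog=MyDVDs;Integrated Security=True");
    if (!db.DatabaseExists())
    {
        db.CreateDatabase();
    }
}
```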
How to: Manage Change ConflictsLINQ to SQL provides a collection of APIs to help you discover, evaluate, and resolve concurrency conflicts. In This Section
How to: Detect and Resolve Conflicting Submissions
How to: Specify When Concurrency Exceptions are Thrown
How to: Specify Which Members are Tested for Concurrency Conflicts
How to: Retrieve Entity Conflict Information
How to: Retrieve Member Conflict Information
How to: Resolve Conflicts by Retaining Database Values
How to: Resolve Conflicts by Overwriting Database Values
How to: Resolve Conflicts by Merging with Database Values Related Sections
Optimistic Concurrency: Overview How to: Detect and Resolve Conflicting SubmissionsLINQ to SQL provides many resources for detecting and resolving conflicts that stem from multi-user changes to the database. For more information, see How to: Manage Change Conflicts. ExampleThe following example shows a try/catch block that catches a ChangeConflictException exception. Entity and member information for each conflict is displayed in the console window. Note You must include the using System.Reflection directive (Imports System.Reflection in Visual Basic) to support the information retrieval. For more information, see System.Reflection. C#// using System.Reflection; Northwnd db = new Northwnd(@"c:\northwnd.mdf"); Customer newCust = new Customer(); newCust.City = "Auburn"; newCust.CustomerID = "AUBUR"; newCust.CompanyName = "AubCo"; db.Customers.InsertOnSubmit(newCust); try { db.SubmitChanges(ConflictMode.ContinueOnConflict); } catch (ChangeConflictException e) { Console.WriteLine("Optimistic concurrency error."); Console.WriteLine(e.Message); Console.ReadLine(); foreach (ObjectChangeConflict occ in db.ChangeConflicts) { MetaTable metatable = db.Mapping.GetTable(occ.Object.GetType()); Customer entityInConflict = (Customer)occ.Object; Console.WriteLine("Table name: {0}", metatable.TableName); Console.Write("Customer ID: "); Console.WriteLine(entityInConflict.CustomerID); foreach (MemberChangeConflict mcc in occ.MemberConflicts) { object currVal = mcc.CurrentValue; object origVal = mcc.OriginalValue; object databaseVal = mcc.DatabaseValue; MemberInfo mi = mcc.Member; Console.WriteLine("Member: {0}", mi.Name); Console.WriteLine("current value: {0}", currVal); Console.WriteLine("original value: {0}", origVal); Console.WriteLine("database value: {0}", databaseVal); } } } catch (Exception ee) { // Catch other exceptions. 
Console.WriteLine(ee.Message); } finally { Console.WriteLine("TryCatch block has finished."); } See alsoHow to: Specify When Concurrency Exceptions are ThrownIn LINQ to SQL, a ChangeConflictException exception is thrown when objects do not update because of optimistic concurrency conflicts. For more information, see Optimistic Concurrency: Overview. Before you submit your changes to the database, you can specify when concurrency exceptions should be thrown:
When thrown, the ChangeConflictException exception provides access to a ChangeConflictCollection collection. This collection provides details for each conflict (mapped to a single failed update try), including access to the MemberConflicts collection. Each member conflict maps to a single member in the update that failed the concurrency check. ExampleThe following code shows examples of both values. C#Northwnd db = new Northwnd("..."); // Create, update, delete code. db.SubmitChanges(ConflictMode.FailOnFirstConflict); // or db.SubmitChanges(ConflictMode.ContinueOnConflict); See alsoHow to: Specify Which Members are Tested for Concurrency ConflictsApply one of three enums to the LINQ to SQL UpdateCheck property on a ColumnAttribute attribute to specify which members are to be included in update checks for the detection of optimistic concurrency conflicts. The UpdateCheck property (mapped at design time) is used together with run-time concurrency features in LINQ to SQL. For more information, see Optimistic Concurrency: Overview. Note Original member values are compared with the current database state as long as no member is designated as IsVersion=true. For more information, see IsVersion. For code examples, see UpdateCheck. To always use this member for detecting conflicts
To never use this member for detecting conflicts
To use this member for detecting conflicts only when the application has changed the value of the member
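The three options above correspond to the values of the UpdateCheck enumeration: Always, Never, and WhenChanged. A minimal attribute-mapping sketch (the table and members here are illustrative, not from the Northwind mapping):

```csharp
[Table(Name = "Employees")]
public class Employee
{
    [Column(IsPrimaryKey = true)]
    public int EmployeeID;

    // Always included in the update check (the default).
    [Column(UpdateCheck = UpdateCheck.Always)]
    public string LastName;

    // Never used to detect conflicts.
    [Column(UpdateCheck = UpdateCheck.Never)]
    public string Notes;

    // Checked only when the application has changed this member's value.
    [Column(UpdateCheck = UpdateCheck.WhenChanged)]
    public string Title;
}
```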
ExampleThe following example specifies that HomePage objects should never be tested during update checks. For more information, see UpdateCheck. C#[Column(Storage="_HomePage", DbType="NText", UpdateCheck=UpdateCheck.Never)] public string HomePage { get { return this._HomePage; } set { if ((this._HomePage != value)) { this.OnHomePageChanging(value); this.SendPropertyChanging(); this._HomePage = value; this.SendPropertyChanged("HomePage"); this.OnHomePageChanged(); } } } See alsoHow to: Retrieve Entity Conflict InformationYou can use objects of the ObjectChangeConflict class to provide information about conflicts revealed by ChangeConflictException exceptions. For more information, see Optimistic Concurrency: Overview. ExampleThe following example iterates through a list of accumulated conflicts. C#Northwnd db = new Northwnd("..."); try { db.SubmitChanges(ConflictMode.ContinueOnConflict); } catch (ChangeConflictException e) { Console.WriteLine("Optimistic concurrency error."); Console.WriteLine(e.Message); foreach (ObjectChangeConflict occ in db.ChangeConflicts) { MetaTable metatable = db.Mapping.GetTable(occ.Object.GetType()); Customer entityInConflict = (Customer)occ.Object; Console.WriteLine("Table name: {0}", metatable.TableName); Console.Write("Customer ID: "); Console.WriteLine(entityInConflict.CustomerID); Console.ReadLine(); } } See alsoHow to: Retrieve Member Conflict InformationYou can use the MemberChangeConflict class to retrieve information about individual members in conflict. In this same context you can provide for custom handling of the conflict for any member. For more information, see Optimistic Concurrency: Overview. ExampleThe following code iterates through the ObjectChangeConflict objects. For each object, it then iterates through the MemberChangeConflict objects. Note Include System.Reflection in order to provide Member information. C#// Add 'using System.Reflection' for this section. 
Northwnd db = new Northwnd("..."); try { db.SubmitChanges(ConflictMode.ContinueOnConflict); } catch (ChangeConflictException e) { Console.WriteLine("Optimistic concurrency error."); Console.WriteLine(e.Message); foreach (ObjectChangeConflict occ in db.ChangeConflicts) { MetaTable metatable = db.Mapping.GetTable(occ.Object.GetType()); Customer entityInConflict = (Customer)occ.Object; Console.WriteLine("Table name: {0}", metatable.TableName); Console.Write("Customer ID: "); Console.WriteLine(entityInConflict.CustomerID); foreach (MemberChangeConflict mcc in occ.MemberConflicts) { object currVal = mcc.CurrentValue; object origVal = mcc.OriginalValue; object databaseVal = mcc.DatabaseValue; MemberInfo mi = mcc.Member; Console.WriteLine("Member: {0}", mi.Name); Console.WriteLine("current value: {0}", currVal); Console.WriteLine("original value: {0}", origVal); Console.WriteLine("database value: {0}", databaseVal); Console.ReadLine(); } } } See alsoHow to: Resolve Conflicts by Retaining Database ValuesTo reconcile differences between expected and actual database values before you try to resubmit your changes, you can use OverwriteCurrentValues to retain the values found in the database. The current values in the object model are then overwritten. For more information, see Optimistic Concurrency: Overview. Note In all cases, the record on the client is first refreshed by retrieving the updated data from the database. This action makes sure that the next update try will not fail on the same concurrency checks. ExampleIn this scenario, a ChangeConflictException exception is thrown when User1 tries to submit changes, because User2 has in the meantime changed the Assistant and Department columns. The following table shows the situation.
User1 decides to resolve this conflict by having the newer database values overwrite the current values in the object model. When User1 resolves the conflict by using OverwriteCurrentValues, the result in the database is as shown in the following table:
The following example code shows how to overwrite current values in the object model with the database values. (No inspection or custom handling of individual member conflicts occurs.) C#Northwnd db = new Northwnd("..."); try { db.SubmitChanges(ConflictMode.ContinueOnConflict); } catch (ChangeConflictException e) { Console.WriteLine(e.Message); foreach (ObjectChangeConflict occ in db.ChangeConflicts) { // All database values overwrite current values. occ.Resolve(RefreshMode.OverwriteCurrentValues); } } See alsoHow to: Resolve Conflicts by Overwriting Database ValuesTo reconcile differences between expected and actual database values before you try to resubmit your changes, you can use KeepCurrentValues to overwrite database values. For more information, see Optimistic Concurrency: Overview. Note In all cases, the record on the client is first refreshed by retrieving the updated data from the database. This action makes sure that the next update try will not fail on the same concurrency checks. ExampleIn this scenario, a ChangeConflictException exception is thrown when User1 tries to submit changes, because User2 has in the meantime changed the Assistant and Department columns. The following table shows the situation.
User1 decides to resolve this conflict by overwriting database values with the current client member values. When User1 resolves the conflict by using KeepCurrentValues, the result in the database is as in the following table:
The following example code shows how to overwrite database values with the current client member values. (No inspection or custom handling of individual member conflicts occurs.) C#try { db.SubmitChanges(ConflictMode.ContinueOnConflict); } catch (ChangeConflictException e) { Console.WriteLine(e.Message); foreach (ObjectChangeConflict occ in db.ChangeConflicts) { //No database values are merged into current. occ.Resolve(RefreshMode.KeepCurrentValues); } } See alsoHow to: Resolve Conflicts by Merging with Database ValuesTo reconcile differences between expected and actual database values before you try to resubmit your changes, you can use KeepChanges to merge database values with the current client member values. For more information, see Optimistic Concurrency: Overview. Note In all cases, the record on the client is first refreshed by retrieving the updated data from the database. This action makes sure that the next update try will not fail on the same concurrency checks. ExampleIn this scenario, a ChangeConflictException exception is thrown when User1 tries to submit changes, because User2 has in the meantime changed the Assistant and Department columns. The following table shows the situation.
User1 decides to resolve this conflict by merging database values with the current client member values. The result will be that database values are overwritten only when the current changeset has also modified that value. When User1 resolves the conflict by using KeepChanges, the result in the database is as in the following table:
The following example shows how to merge database values with the current client member values (unless the client has also changed that value). No inspection or custom handling of individual member conflicts occurs. C#try { db.SubmitChanges(ConflictMode.ContinueOnConflict); } catch (ChangeConflictException e) { Console.WriteLine(e.Message); // Automerge database values for members that client // has not modified. foreach (ObjectChangeConflict occ in db.ChangeConflicts) { occ.Resolve(RefreshMode.KeepChanges); } } // Submit succeeds on second try. db.SubmitChanges(ConflictMode.FailOnFirstConflict); See also
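The same three RefreshMode values can also be applied per member rather than per object by calling Resolve on each MemberChangeConflict. A hedged sketch that keeps the client's edits and refreshes everything else:

```csharp
try
{
    db.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException)
{
    foreach (ObjectChangeConflict occ in db.ChangeConflicts)
    {
        foreach (MemberChangeConflict mcc in occ.MemberConflicts)
        {
            // Keep values this client changed; take the database value otherwise.
            if (mcc.IsModified)
                mcc.Resolve(RefreshMode.KeepCurrentValues);
            else
                mcc.Resolve(RefreshMode.OverwriteCurrentValues);
        }
    }
    // Resubmit after the conflicts are resolved.
    db.SubmitChanges(ConflictMode.FailOnFirstConflict);
}
```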
Debugging SupportLINQ to SQL provides general debugging support for LINQ to SQL projects. Also see Debugging LINQ. LINQ to SQL also provides special tools for viewing SQL code. For more information, see the topics in this section. In This Section
How to: Display Generated SQL
How to: Display a ChangeSet
How to: Display LINQ to SQL Commands
Troubleshooting See alsoHow to: Display Generated SQLYou can view the SQL code generated for queries and change processing by using the Log property. This approach can be useful for understanding LINQ to SQL functionality and for debugging specific problems. ExampleThe following example uses the Log property to display SQL code in the console window before the code is executed. You can use this property with query, insert, update, and delete commands. The lines from the console window are what you see when you execute the Visual Basic or C# code that follows. SELECT [t0].[CustomerID], [t0].[CompanyName], [t0].[ContactName], [t0].[ContactT itle], [t0].[Address], [t0].[City], [t0].[Region], [t0].[PostalCode], [t0].[Coun try], [t0].[Phone], [t0].[Fax] FROM [dbo].[Customers] AS [t0] WHERE [t0].[City] = @p0 -- @p0: Input String (Size = 6; Prec = 0; Scale = 0) [London] -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.20810.0 AROUT BSBEV CONSH EASTC NORTS SEVESC# db.Log = Console.Out; IQueryable<Customer> custQuery = from cust in db.Customers where cust.City == "London" select cust; foreach(Customer custObj in custQuery) { Console.WriteLine(custObj.CustomerID); } See alsoHow to: Display a ChangeSetYou can view changes tracked by a DataContext by using GetChangeSet. ExampleThe following example retrieves customers whose city is London, changes the city to Paris, and submits the changes back to the database. C#Northwnd db = new Northwnd(@"c:\northwnd.mdf"); var custQuery = from cust in db.Customers where cust.City == "London" select cust; foreach (Customer custObj in custQuery) { Console.WriteLine("CustomerID: {0}", custObj.CustomerID); Console.WriteLine("\tOriginal value: {0}", custObj.City); custObj.City = "Paris"; Console.WriteLine("\tUpdated value: {0}", custObj.City); } ChangeSet cs = db.GetChangeSet(); Console.Write("Total changes: {0}", cs); // Freeze the console window. 
Console.ReadLine(); db.SubmitChanges(); Output from this code appears similar to the following. Note that the summary at the end shows that eight changes were made. consoleCustomerID: AROUT Original value: London Updated value: Paris CustomerID: BSBEV Original value: London Updated value: Paris CustomerID: CONSH Original value: London Updated value: Paris CustomerID: EASTC Original value: London Updated value: Paris CustomerID: NORTS Original value: London Updated value: Paris CustomerID: PARIS Original value: London Updated value: Paris CustomerID: SEVES Original value: London Updated value: Paris CustomerID: SPECD Original value: London Updated value: Paris Total changes: {Added: 0, Removed: 0, Modified: 8} See alsoHow to: Display LINQ to SQL CommandsUse GetCommand to display SQL commands and other information. ExampleIn the following example, the console window displays the output from the query, followed by the SQL commands that are generated, the type of commands, and the type of connection. 
C#// using System.Data.Common; Northwnd db = new Northwnd(@"c:\northwnd.mdf"); var q = from cust in db.Customers where cust.City == "London" select cust; Console.WriteLine("Customers from London:"); foreach (var z in q) { Console.WriteLine("\t {0}",z.ContactName); } DbCommand dc = db.GetCommand(q); Console.WriteLine("\nCommand Text: \n{0}",dc.CommandText); Console.WriteLine("\nCommand Type: {0}",dc.CommandType); Console.WriteLine("\nConnection: {0}",dc.Connection); Console.ReadLine(); Output appears as follows: Customers from London: Thomas Hardy Victoria Ashworth Elizabeth Brown Ann Devon Simon Crowther Marie Bertrand Hari Kumar Dominique Perrier Command Text: SELECT [t0].[CustomerID], [t0].[CompanyName], [t0].[ContactName], [t0].[ContactT itle], [t0].[Address], [t0].[City], [t0].[Region], [t0].[PostalCode], [t0].[Coun try], [t0].[Phone], [t0].[Fax] FROM [dbo].[Customers] AS [t0] WHERE [t0].[City] = @p0 Command Type: Text Connection: System.Data.SqlClient.SqlConnection See alsoTroubleshootingThe following information exposes some issues you might encounter in your LINQ to SQL applications, and provides suggestions to avoid or otherwise reduce the effect of these issues. Additional issues are addressed in Frequently Asked Questions. Unsupported Standard Query OperatorsLINQ to SQL does not support all standard query operator methods (for example, ElementAt). As a result, projects that compile can still produce run-time errors. For more information, see Standard Query Operator Translation. Memory IssuesIf a query involves an in-memory collection and LINQ to SQL Table<TEntity>, the query might be executed in memory, depending on the order in which the two collections are specified. If the query must be executed in memory, then the data from the database table will need to be retrieved. This approach is inefficient and could result in significant memory and processor usage. Try to avoid such multi-domain queries. 
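For example, a query rooted in an in-memory sequence cannot be translated to SQL and forces the table data to be retrieved, whereas the same filter rooted in the table translates to a single SQL statement. A sketch of the two shapes (the IDs are hypothetical):

```csharp
// In-memory collection of customer IDs.
List<string> ids = new List<string> { "ALFKI", "BONAP" };

// Rooted in the in-memory list: the join executes in memory, so the
// Customers rows must first be retrieved from the database.
var inMemory = from id in ids
               join c in db.Customers on id equals c.CustomerID
               select c;

// Rooted in the database table: Contains translates to a SQL IN clause.
var inDatabase = from c in db.Customers
                 where ids.Contains(c.CustomerID)
                 select c;
```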
File Names and SQLMetalTo specify an input file name, add the name to the command line as the input file. Including the file name in the connection string (using the /conn option) is not supported. For more information, see SqlMetal.exe (Code Generation Tool). Class Library ProjectsThe Object Relational Designer creates a connection string in the app.config file of the project. In class library projects, the app.config file is not used. LINQ to SQL uses the Connection String provided in the design-time files. Changing the value in app.config does not change the database to which your application connects. Cascade DeleteLINQ to SQL does not support or recognize cascade-delete operations. If you want to delete a row in a table that has constraints against it, you must do either of the following:
Otherwise, a SqlException exception is thrown. For more information, see How to: Delete Rows From the Database. Expression Not QueryableIf you get the "Expression [expression] is not queryable; are you missing an assembly reference?" error, make sure of the following:
DuplicateKeyExceptionIn the course of debugging a LINQ to SQL project, you might traverse an entity's relations. Doing so brings these items into the cache, and LINQ to SQL becomes aware of their presence. If you then try to execute Attach or InsertOnSubmit or a similar method that produces multiple rows that have the same key, a DuplicateKeyException is thrown. String Concatenation ExceptionsConcatenation on operands mapped to [n]text and other [n][var]char is not supported. An exception is thrown for concatenation of strings mapped to the two different sets of types. For more information, see System.String Methods. Skip and Take Exceptions in SQL Server 2000You must use identity members (IsPrimaryKey) when you use Take or Skip against a SQL Server 2000 database. The query must be against a single table (that is, not a join), or be a Distinct, Except, Intersect, or Union operation, and must not include a Concat operation. For more information, see the "SQL Server 2000 Support" section in Standard Query Operator Translation. This requirement does not apply to SQL Server 2005. GroupBy InvalidOperationExceptionThis exception is thrown when a column value is null in a GroupBy query that groups by a boolean expression, such as group x by (Phone==@phone). Because the expression is a boolean, the key is inferred to be boolean, not nullable boolean. When the translated comparison produces a null, an attempt is made to assign a nullable boolean to a boolean, and the exception is thrown. To avoid this situation (assuming you want to treat nulls as false), use an approach such as the following: GroupBy="(Phone != null) && (Phone==@Phone)" OnCreated() Partial MethodThe generated method OnCreated() is called each time the object constructor is called, including the scenario in which LINQ to SQL calls the constructor to make a copy for original values. Take this behavior into account if you implement the OnCreated() method in your own partial class. 
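As an illustration of the Skip and Take restrictions described above for SQL Server 2000 (a single table ordered by an identity or primary-key member, with no Concat), a paging sketch against the Northwind Customers table might look like this:

```csharp
// Third page of customers, 10 rows per page, ordered by the primary key.
var page = (from c in db.Customers
            orderby c.CustomerID
            select c)
           .Skip(20)
           .Take(10);

foreach (Customer c in page)
{
    Console.WriteLine(c.CustomerID);
}
```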
See alsoBackground InformationThe topics in this section pertain to concepts and procedures that extend beyond the basics about using LINQ to SQL. Follow these steps to find additional examples of LINQ to SQL code and applications:
In This Section
ADO.NET and LINQ to SQL
Analyzing LINQ to SQL Source Code
Customizing Insert, Update, and Delete Operations
Data Binding
Inheritance Support
Local Method Calls
N-Tier and Remote Applications with LINQ to SQL
Object Identity
The LINQ to SQL Object Model
Object States and Change-Tracking
Optimistic Concurrency: Overview
Query Concepts
Retrieving Objects from the Identity Cache
Security in LINQ to SQL
Serialization
Stored Procedures
Transaction Support
SQL-CLR Type Mismatches
SQL-CLR Custom Type Mappings
User-Defined Functions Related Sections
Programming Guide ADO.NET and LINQ to SQLLINQ to SQL is part of the ADO.NET family of technologies. It is based on services provided by the ADO.NET provider model. You can therefore mix LINQ to SQL code with existing ADO.NET applications and migrate current ADO.NET solutions to LINQ to SQL. The following illustration provides a high-level view of the relationship.
ConnectionsYou can supply an existing ADO.NET connection when you create a LINQ to SQL DataContext. All operations against the DataContext (including queries) use this provided connection. If the connection is already open, LINQ to SQL leaves it as is when you are finished with it. C#string connString = @"Data Source=.\SQLEXPRESS;AttachDbFilename=c:\northwind.mdf; Integrated Security=True; Connect Timeout=30; User Instance=True"; SqlConnection nwindConn = new SqlConnection(connString); nwindConn.Open(); Northwnd interop_db = new Northwnd(nwindConn); SqlTransaction nwindTxn = nwindConn.BeginTransaction(); try { SqlCommand cmd = new SqlCommand( "UPDATE Products SET QuantityPerUnit = 'single item' WHERE ProductID = 3"); cmd.Connection = nwindConn; cmd.Transaction = nwindTxn; cmd.ExecuteNonQuery(); interop_db.Transaction = nwindTxn; Product prod1 = interop_db.Products .First(p => p.ProductID == 4); Product prod2 = interop_db.Products .First(p => p.ProductID == 5); prod1.UnitsInStock -= 3; prod2.UnitsInStock -= 5; interop_db.SubmitChanges(); nwindTxn.Commit(); } catch (Exception e) { Console.WriteLine(e.Message); Console.WriteLine("Error submitting changes... all changes rolled back."); } nwindConn.Close(); You can always access the connection and close it yourself by using the Connection property, as in the following code: C#db.Connection.Close(); TransactionsYou can supply your DataContext with your own database transaction when your application has already initiated the transaction and you want your DataContext to be involved. The preferred method of doing transactions with the .NET Framework is to use the TransactionScope object. By using this approach, you can make distributed transactions that work across databases and other memory-resident resource managers. Transaction scopes require few resources to start. They promote themselves to distributed transactions only when there are multiple connections within the scope of the transaction. 
C#using (TransactionScope ts = new TransactionScope()) { db.SubmitChanges(); ts.Complete(); } You cannot use this approach for all databases. For example, the SqlClient connection cannot promote system transactions when it works against a SQL Server 2000 server. Instead, it automatically enlists to a full, distributed transaction whenever it sees a transaction scope being used. Direct SQL CommandsAt times you can encounter situations where the ability of the DataContext to query or submit changes is insufficient for the specialized task you want to perform. In these circumstances you can use the ExecuteQuery method to issue SQL commands to the database and convert the query results to objects. For example, assume that the data for the Customer class is spread over two tables (customer1 and customer2). The following query returns a sequence of Customer objects: C#IEnumerable<Customer> results = db.ExecuteQuery<Customer>( @"select c1.custid as CustomerID, c2.custName as ContactName from customer1 as c1, customer2 as c2 where c1.custid = c2.custid" ); As long as the column names in the tabular results match column properties of your entity class, LINQ to SQL creates your objects out of any SQL query. ParametersThe ExecuteQuery method accepts parameters. The following code executes a parameterized query: C#IEnumerable<Customer> results = db.ExecuteQuery<Customer>( "select contactname from customers where city = {0}", "London" ); Note Parameters are expressed in the query text by using the same curly notation used by Console.WriteLine() and String.Format(). String.Format() takes the query string you provide and substitutes the curly-braced parameters with generated parameter names such as @p0, @p1 …, @p(n). See alsoAnalyzing LINQ to SQL Source CodeBy using the following steps, you can produce LINQ to SQL source code from the Northwind sample database. You can compare elements of the object model with elements of the database to better see how different items are mapped. 
Note Developers using Visual Studio can use the O/R Designer to produce this code.
See alsoCustomizing Insert, Update, and Delete OperationsBy default, LINQ to SQL generates dynamic SQL to implement insert, read, update, and delete operations. In practice, however, you typically customize your application to suit your business needs. Note If you are using Visual Studio, you can use the Object Relational Designer to customize insert, update, and delete actions. This section of topics describes the techniques that LINQ to SQL provides for customizing insert, read, update, and delete operations in your application. In This Section
Customizing Operations: Overview
Insert, Update, and Delete Operations
Responsibilities of the Developer In Overriding Default Behavior
Adding Business Logic By Using Partial Methods Customizing Operations: OverviewBy default, LINQ to SQL generates dynamic SQL for insert, update, and delete operations based on mapping. However, in practice you typically want to add your own business logic to provide for security, validation, and so forth. LINQ to SQL techniques for customizing these operations include the following. Loading OptionsIn your queries, you can control how much data related to your main target is retrieved when you connect to the database. This functionality is implemented largely by using DataLoadOptions. For more information, see Deferred versus Immediate Loading. Partial MethodsIn its default mapping, LINQ to SQL provides partial methods to help you implement your business logic. For more information, see Adding Business Logic By Using Partial Methods. Stored Procedures and User-Defined FunctionsLINQ to SQL supports the use of stored procedures and user-defined functions. Stored procedures are frequently used to customize operations. For more information, see Stored Procedures. See alsoInsert, Update, and Delete OperationsYou perform Insert, Update, and Delete operations in LINQ to SQL by adding, changing, and removing objects in your object model. By default, LINQ to SQL translates your actions to SQL and submits the changes to the database. LINQ to SQL offers maximum flexibility in manipulating and persisting changes that you made to your objects. As soon as entity objects are available (either by retrieving them through a query or by constructing them anew), you can change them as typical objects in your application. That is, you can change their values, you can add them to your collections, and you can remove them from your collections. LINQ to SQL tracks your changes and is ready to transmit them back to the database when you call SubmitChanges. Note LINQ to SQL does not support or recognize cascade-delete operations. 
If you want to delete a row in a table that has constraints against it, you must either set the ON DELETE CASCADE rule in the foreign-key constraint in the database, or use your own code to first delete the child objects that prevent the parent object from being deleted. Otherwise, an exception is thrown. For more information, see How to: Delete Rows From the Database.

The following excerpts use the Customer and Order classes from the Northwind sample database. Class definitions are not shown for brevity.

C#
Northwnd db = new Northwnd(@"c:\Northwnd.mdf");

// Query for a specific customer.
var cust = (from c in db.Customers
            where c.CustomerID == "ALFKI"
            select c).First();

// Change the name of the contact.
cust.ContactName = "New Contact";

// Create and add a new Order to the Orders collection.
Order ord = new Order { OrderDate = DateTime.Now };
cust.Orders.Add(ord);

// Delete an existing Order.
Order ord0 = cust.Orders[0];

// Removing it from the table also removes it from the Customer's list.
db.Orders.DeleteOnSubmit(ord0);

// Ask the DataContext to save all the changes.
db.SubmitChanges();

When you call SubmitChanges, LINQ to SQL automatically generates and executes the SQL commands that it must have to transmit your changes back to the database.

Note: You can override this behavior by using your own custom logic, typically by way of a stored procedure. For more information, see Responsibilities of the Developer In Overriding Default Behavior. Developers using Visual Studio can use the Object Relational Designer to develop stored procedures for this purpose.

See also

Responsibilities of the Developer In Overriding Default Behavior

LINQ to SQL does not enforce the following requirements, but behavior is undefined if these requirements are not satisfied.
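As a sketch of the kind of override discussed above, a partial DataContext class can replace the dynamically generated update SQL with a stored-procedure call. All names here are illustrative: this assumes the designer generated a matching `partial void UpdateCustomer(Customer instance)` signature on a `Northwnd` DataContext, and that a stored procedure named `UpdateCustomerContact` exists.

```csharp
// Hypothetical partial DataContext; assumes the code generator defined
// the partial void UpdateCustomer(Customer instance) signature.
public partial class Northwnd
{
    partial void UpdateCustomer(Customer instance)
    {
        // Replace the dynamic SQL that LINQ to SQL would generate with a
        // call to a stored procedure (procedure name is illustrative).
        this.ExecuteCommand(
            "EXEC UpdateCustomerContact {0}, {1}",
            instance.CustomerID, instance.ContactName);
    }
}
```

Because this fragment requires a live database and the generated `Northwnd` and `Customer` classes, it is a sketch rather than a complete program.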
See also

Adding Business Logic By Using Partial Methods

You can customize the Visual Basic and C# code generated in your LINQ to SQL projects by using partial methods. The code that LINQ to SQL generates defines signatures as one part of a partial method. If you want to implement the method, you can add your own partial method. If you do not add your own implementation, the compiler removes the partial method signature and all calls to it, and LINQ to SQL uses its default behavior.

Note: If you are using Visual Studio, you can use the Object Relational Designer to add validation and other customizations to entity classes.

For example, the default mapping for the Customer class in the Northwind sample database includes the following partial method:

C#
partial void OnAddressChanged();

You can implement your own method by adding code such as the following to your own partial Customer class:

C#
public partial class Customer
{
    partial void OnAddressChanged()
    {
        // Insert business logic here.
    }
}

This approach is typically used in LINQ to SQL to override default methods for Insert, Update, and Delete operations, and to validate properties during object life-cycle events. For more information, see Partial Methods (Visual Basic) or partial (Method) (C# Reference).

Example
Description
The following example shows ExampleClass first as it might be defined by a code-generating tool such as SQLMetal, and then how you might implement only one of the two methods.

Code
C#
// Code-generating tool defines a partial class, including
// two partial methods.
partial class ExampleClass
{
    partial void onFindingMaxOutput();
    partial void onFindingMinOutput();
}

// Developer implements one of the partial methods. Compiler
// discards the signature of the other method.
partial class ExampleClass
{
    partial void onFindingMaxOutput()
    {
        Console.WriteLine("Maximum has been found.");
    }
}

Example
Description
The following example uses the relationship between Shipper and Order entities.
Note the partial methods InsertShipper and DeleteShipper among the methods that follow. These methods override the default partial methods supplied by the LINQ to SQL mapping.

Code
C#
public static int LoadOrdersCalled = 0;

private IEnumerable<Order> LoadOrders(Shipper shipper)
{
    LoadOrdersCalled++;
    return this.Orders.Where(o => o.ShipVia == shipper.ShipperID);
}

public static int LoadShipperCalled = 0;

private Shipper LoadShipper(Order order)
{
    LoadShipperCalled++;
    return this.Shippers.Single(s => s.ShipperID == order.ShipVia);
}

public static int InsertShipperCalled = 0;

partial void InsertShipper(Shipper shipper)
{
    InsertShipperCalled++;
    // Call a Web service to perform an insert operation.
    InsertShipperService(shipper);
}

public static int UpdateShipperCalled = 0;

private void UpdateShipper(Shipper original, Shipper current)
{
    Shipper shipper = new Shipper();
    UpdateShipperCalled++;
    // Call a Web service to update the shipper.
    InsertShipperService(shipper);
}

public static bool DeleteShipperCalled;

partial void DeleteShipper(Shipper shipper)
{
    DeleteShipperCalled = true;
}

See also

Data Binding

LINQ to SQL supports binding to common controls, such as grid controls. Specifically, LINQ to SQL defines the basic patterns for binding to a data grid and handling master-detail binding, both with regard to display and updating.

Underlying Principle
LINQ to SQL translates LINQ queries to SQL for execution on a database. The results are strongly typed IEnumerable collections. Because these objects are ordinary common language runtime (CLR) objects, ordinary object data binding can be used to display the results. On the other hand, change operations (inserts, updates, and deletes) require additional steps.

Operation
Implicit binding to Windows Forms controls is accomplished by implementing IListSource. The data sources generic Table<TEntity> (Table<T> in C# or Table(Of T) in Visual Basic) and generic DataQuery have been updated to implement IListSource.
User interface (UI) data-binding engines (Windows Forms and Windows Presentation Foundation) both test whether their data source implements IListSource. Therefore, assigning a query directly to the data source of a control implicitly invokes LINQ to SQL collection generation, as in the following example:

C#
DataGrid dataGrid1 = new DataGrid();
DataGrid dataGrid2 = new DataGrid();
DataGrid dataGrid3 = new DataGrid();

var custQuery = from cust in db.Customers
                select cust;

dataGrid1.DataSource = custQuery;
dataGrid2.DataSource = custQuery;
dataGrid2.DataMember = "Orders";

BindingSource bs = new BindingSource();
bs.DataSource = custQuery;
dataGrid3.DataSource = bs;

The same occurs with Windows Presentation Foundation:

C#
ListView listView1 = new ListView();

var custQuery2 = from cust in db.Customers
                 select cust;

listView1.ItemsSource = custQuery2;

Collection generation is implemented by generic Table<TEntity> and generic DataQuery in GetList.

IListSource Implementation
LINQ to SQL implements IListSource in two locations:
Specialized Collections
For many features described earlier in this document, BindingList<T> has been specialized into different classes: generic SortableBindingList and generic DataBindingList. Both are declared as internal.

Generic SortableBindingList
This class inherits from BindingList<T>, and is a sortable version of BindingList<T>. Sorting is an in-memory operation and never contacts the database itself. BindingList<T> implements IBindingList but does not support sorting by default. However, BindingList<T> implements IBindingList with virtual core methods that you can easily override. Generic SortableBindingList overrides SupportsSortingCore, SortPropertyCore, SortDirectionCore, and ApplySortCore. ApplySortCore is called by ApplySort and sorts the list of T items for a given property. An exception is raised if the property does not belong to T.

To achieve sorting, LINQ to SQL creates a generic SortableBindingList.PropertyComparer class that implements the generic IComparer<T> interface and provides a default comparer for a given type T, a PropertyDescriptor, and a direction. This class dynamically creates a Comparer of T, where T is the PropertyType of the PropertyDescriptor. Then the default comparer is retrieved from the static generic Comparer class, and a default instance is obtained by using reflection.

Generic SortableBindingList is also the base class for DataBindingList. Generic SortableBindingList offers two virtual methods for suspending or resuming add/remove tracking of items. These two methods can be used for base features such as sorting, but are actually implemented by derived classes such as generic DataBindingList.

Generic DataBindingList
This class inherits from generic SortableBindingList. Generic DataBindingList keeps a reference to the underlying generic Table of the generic IQueryable used for the initial filling of the collection.
Generic DataBindingList adds tracking of item adds and removes to the collection by overriding InsertItem() and RemoveItem(). It also implements the suspend/resume tracking feature to make tracking conditional. This feature lets generic DataBindingList take advantage of the polymorphic usage of the tracking feature of its parent classes.

Binding to EntitySets
Binding to EntitySet is a special case because EntitySet is already a collection that implements IBindingList. LINQ to SQL adds sorting and canceling (ICancelAddNew) support. An EntitySet class uses an internal list to store entities. This list is a low-level collection based on a generic array, the generic ItemList class.

Adding a Sorting Feature
Arrays offer a sort method (Array.Sort()) that can be used with a Comparer of T. LINQ to SQL uses the generic SortableBindingList.PropertyComparer class described earlier in this topic to obtain this comparer for the property and the direction to be sorted on. An ApplySort method is added to generic ItemList to call this feature. On the EntitySet side, you now have to declare sorting support:
When you use a System.Windows.Forms.BindingSource and bind an EntitySet<TEntity> to the System.Windows.Forms.BindingSource.DataSource, you must call EntitySet<TEntity>.GetNewBindingList to update BindingSource.List.

If you use a System.Windows.Forms.BindingSource, set the BindingSource.DataMember property, and set BindingSource.DataSource to a class that has a property named by BindingSource.DataMember that exposes the EntitySet<TEntity>, you do not have to call EntitySet<TEntity>.GetNewBindingList to update BindingSource.List, but you lose sorting capability.

Caching
LINQ to SQL queries implement GetList. When the Windows Forms BindingSource class encounters this interface, it calls GetList() three times for a single connection. To work around this situation, LINQ to SQL implements a cache per instance to store and always return the same generated collection.

Cancellation
IBindingList defines an AddNew method that is used by controls to create a new item from a bound collection. The DataGridView control shows this feature very well when the last visible row contains a star in its header. The star shows you that you can add a new item. In addition to this feature, a collection can also implement ICancelAddNew. This feature allows the controls to cancel or validate whether the new edited item has been validated or not. ICancelAddNew is implemented in all LINQ to SQL databound collections (generic SortableBindingList and generic EntitySet). In both implementations the code performs as follows:
Troubleshooting
This section calls out several items that might help you troubleshoot your LINQ to SQL data-binding applications.
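The ApplySortCore mechanism described earlier in this topic (for generic SortableBindingList) can be illustrated with a minimal, self-contained sketch. The class and member names below are illustrative, not LINQ to SQL's internal implementation; the sketch only shows how BindingList<T>'s virtual core members enable in-memory sorting.

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;

// Minimal sketch of the SortableBindingList idea: BindingList<T> exposes
// virtual *Core members that a derived class can override to add sorting.
public class SimpleSortableBindingList<T> : BindingList<T>
{
    protected override bool SupportsSortingCore { get { return true; } }

    protected override void ApplySortCore(
        PropertyDescriptor prop, ListSortDirection direction)
    {
        // Sort the backing list entirely in memory; no database involved.
        List<T> items = (List<T>)Items;
        List<T> sorted = (direction == ListSortDirection.Ascending
            ? items.OrderBy(x => prop.GetValue(x))
            : items.OrderByDescending(x => prop.GetValue(x))).ToList();
        items.Clear();
        items.AddRange(sorted);
        OnListChanged(new ListChangedEventArgs(ListChangedType.Reset, -1));
    }
}

public class Person
{
    public int Age { get; set; }
}

public static class Demo
{
    public static int[] SortedAges()
    {
        var list = new SimpleSortableBindingList<Person>
        {
            new Person { Age = 3 }, new Person { Age = 1 }, new Person { Age = 2 }
        };
        PropertyDescriptor prop =
            TypeDescriptor.GetProperties(typeof(Person))["Age"];
        // IBindingList.ApplySort routes to the protected ApplySortCore.
        ((IBindingList)list).ApplySort(prop, ListSortDirection.Ascending);
        return list.Select(p => p.Age).ToArray();
    }
}
```

Calling `Demo.SortedAges()` sorts the bound collection by the Age property without touching any data store, which is the same design point the internal class makes: sorting is purely an in-memory concern layered over IBindingList.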
See also

Inheritance Support

LINQ to SQL supports single-table mapping. In other words, a complete inheritance hierarchy is stored in a single database table. The table contains the flattened union of all possible data columns for the whole hierarchy. (A union is the result of combining two tables into one table that has the rows that were present in either of the original tables.) Each row has nulls in the columns that do not apply to the type of the instance represented by the row.

The single-table mapping strategy is the simplest representation of inheritance and provides good performance characteristics for many different categories of queries. To implement this mapping in LINQ to SQL, you must specify the attributes and attribute properties on the root class of the inheritance hierarchy. For more information, see How to: Map Inheritance Hierarchies. Developers using Visual Studio can also use the Object Relational Designer to map inheritance hierarchies.

See also

Local Method Calls

A local method call is one that is executed within the object model. A remote method call is one that LINQ to SQL translates to SQL and transmits to the database engine for execution. Local method calls are needed when LINQ to SQL cannot translate the call into SQL. Otherwise, an InvalidOperationException is thrown.

Example 1
In the following example, an Order class is mapped to the Orders table in the Northwind sample database. A local instance method has been added to the class. In Query 1, the constructor for the Order class is executed locally. In Query 2, if LINQ to SQL tried to translate LocalInstanceMethod() into SQL, the attempt would fail and an InvalidOperationException would be thrown. But because LINQ to SQL provides support for local method calls, Query 2 will not throw an exception.

C#
// Query 1.
var q1 = from ord in db.Orders
         where ord.EmployeeID == 9
         select ord;

foreach (var ordObj in q1)
{
    Console.WriteLine("{0}, {1}", ordObj.OrderID, ordObj.ShipVia.Value);
}

C#
// Query 2.
public int LocalInstanceMethod(int x)
{
    return x + 1;
}

void q2()
{
    var q2 = from ord in db.Orders
             where ord.EmployeeID == 9
             select new
             {
                 member0 = ord.OrderID,
                 member1 = ord.LocalInstanceMethod(ord.ShipVia.Value)
             };
}

See also

N-Tier and Remote Applications with LINQ to SQL

You can create n-tier or multitier applications that use LINQ to SQL. Typically, the LINQ to SQL data context, entity classes, and query construction logic are located on the middle tier as the data access layer (DAL). Business logic and any non-persistent data can be implemented completely in partial classes and methods of entities and the data context, or it can be implemented in separate classes. The client or presentation layer calls methods on the middle tier's remote interface, and the DAL on that tier executes queries or stored procedures that are mapped to DataContext methods. The middle tier typically returns the data to clients as XML representations of entities or proxy objects.

On the middle tier, entities are created by the data context, which tracks their state and manages deferred loading from, and submission of changes to, the database. These entities are "attached" to the DataContext. However, after the entities are sent to another tier through serialization, they become detached, which means the DataContext is no longer tracking their state. Entities that the client sends back for updates must be reattached to the data context before LINQ to SQL can submit the changes to the database. The client is responsible for providing original values and/or timestamps back to the middle tier if those are required for optimistic concurrency checks. In ASP.NET applications, the LinqDataSource manages most of this complexity. For more information, see LinqDataSource Web Server Control Overview.
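The middle-tier remote interface described above might, for a WCF service, be declared along these lines. This is a hypothetical sketch: the service name, operation names, and the Product and Order entity types are illustrative and are assumed to be generated elsewhere.

```csharp
using System.Collections.Generic;
using System.ServiceModel;

// Hypothetical middle-tier service contract; the DAL implementing it
// would create a DataContext per call and wrap LINQ to SQL queries.
[ServiceContract]
public interface INorthwindService
{
    [OperationContract]
    IEnumerable<Product> GetProductsByCategory(int categoryID);

    [OperationContract]
    void InsertOrder(Order o);

    // For original-value concurrency checks, the client sends back
    // both the original and the modified entity.
    [OperationContract]
    void UpdateProduct(Product original, Product modified);
}
```

The presentation tier calls these operations and never exchanges query objects with the middle tier, only concrete entities and collections.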
Additional Resources
For more information about how to implement n-tier applications that use LINQ to SQL, see the following topics. For more information about n-tier applications that use ADO.NET DataSets, see Work with datasets in n-tier applications.

See also

LINQ to SQL N-Tier with ASP.NET

In ASP.NET applications that use LINQ to SQL, you use the LinqDataSource Web server control. The control handles most of the logic needed to query against LINQ to SQL, pass the data to the browser, retrieve it, and submit it to the LINQ to SQL DataContext, which then updates the database. You just configure the control in the markup, and the control handles all the data transfer between LINQ to SQL and the browser. Because the control handles the interactions with the presentation tier, and LINQ to SQL handles the communication with the data tier, your main focus in ASP.NET multitier applications is on writing your custom business logic. For more information, see LinqDataSource Web Server Control Overview.

See also

LINQ to SQL N-Tier with Web Services

LINQ to SQL is designed especially for use on the middle tier in a loosely coupled data access layer (DAL) such as a Web service. If the presentation tier is an ASP.NET Web page, you use the LinqDataSource Web server control to manage the data transfer between the user interface and LINQ to SQL on the middle tier. If the presentation tier is not an ASP.NET page, both the middle tier and the presentation tier must do some additional work to manage the serialization and deserialization of data.

Setting up LINQ to SQL on the Middle Tier
In a Web service or n-tier application, the middle tier contains the data context and the entity classes. You can create these classes manually, or by using either SQLMetal.exe or the Object Relational Designer, as described elsewhere in the documentation. At design time, you have the option to make the entity classes serializable.
For more information, see How to: Make Entities Serializable. Another option is to create a separate set of classes that encapsulate the data to be serialized, and then project into those serializable types when you return data in your LINQ queries.

You then define the interface with the methods that the clients will call to retrieve, insert, and update data. The interface methods wrap your LINQ queries. You can use any kind of serialization mechanism to handle the remote method calls and the serialization of data. The only requirement is that if you have cyclic or bi-directional relationships in your object model, such as that between Customers and Orders in the standard Northwind object model, you must use a serializer that supports them. The Windows Communication Foundation (WCF) DataContractSerializer supports bi-directional relationships, but the XmlSerializer that is used with non-WCF Web services does not. If you choose to use the XmlSerializer, you must make sure that your object model has no cyclic relationships. For more information about Windows Communication Foundation, see Windows Communication Foundation Services and WCF Data Services in Visual Studio.

Implement your business rules or other domain-specific logic by using the partial classes and methods on the DataContext and entity classes to hook into LINQ to SQL runtime events. For more information, see Implementing N-Tier Business Logic.

Defining the Serializable Types
The client or presentation tier must have type definitions for the classes that it will be receiving from the middle tier. Those types may be the entity classes themselves, or special classes that wrap only certain fields from the entity classes for remoting. In any case, LINQ to SQL is completely unconcerned about how the presentation tier acquires those type definitions.
For example, the presentation tier could use WCF to generate the types automatically, or it could have a copy of a DLL in which those types are defined, or it could just define its own versions of the types.

Retrieving and Inserting Data
The middle tier defines an interface that specifies how the presentation tier accesses the data, for example GetProductByID(int productID) or GetCustomers(). On the middle tier, the method body typically creates a new instance of the DataContext and executes a query against one or more of its tables. The middle tier then returns the result as an IEnumerable<T>, where T is either an entity class or another type that is used for serialization. The presentation tier never sends or receives query variables directly to or from the middle tier. The two tiers exchange values, objects, and collections of concrete data. After it has received a collection, the presentation tier can use LINQ to Objects to query it if necessary.

When inserting data, the presentation tier can construct a new object and send it to the middle tier, or it can have the middle tier construct the object based on values that it provides. In general, retrieving and inserting data in n-tier applications does not differ much from the process in 2-tier applications. For more information, see Querying the Database and Making and Submitting Data Changes.

Tracking Changes for Updates and Deletes
LINQ to SQL supports optimistic concurrency based on timestamps (also named RowVersions) and on original values. If the database tables have timestamps, then updates and deletions require little extra work on either the middle tier or the presentation tier. However, if you must use original values for optimistic concurrency checks, then the presentation tier is responsible for tracking those values and sending them back when it makes updates. This is because changes that were made to entities on the presentation tier are not tracked on the middle tier.
In fact, the original retrieval of an entity and the eventual update made to it are typically performed by two completely separate instances of the DataContext. The greater the number of changes that the presentation tier makes, the more complex it becomes to track those changes and package them back to the middle tier. The implementation of a mechanism for communicating changes is completely up to the application. The only requirement is that LINQ to SQL must be given the original values that are required for optimistic concurrency checks. For more information, see Data Retrieval and CUD Operations in N-Tier Applications (LINQ to SQL).

See also

Implementing Business Logic (LINQ to SQL)

The term "business logic" in this topic refers to any custom rules or validation tests that you apply to data before it is inserted, updated, or deleted from the database. Business logic is also sometimes referred to as "business rules" or "domain logic." In n-tier applications it is typically designed as a logical layer so that it can be modified independently of the presentation layer or data access layer. The business logic can be invoked by the data access layer before or after any update, insertion, or deletion of data in the database.

The business logic can be as simple as a schema validation to make sure that the type of a field is compatible with the type of the table column. Or it can consist of a set of objects that interact in arbitrarily complex ways. The rules may be implemented as stored procedures on the database or as in-memory objects. However the business logic is implemented, LINQ to SQL enables you to use partial classes and partial methods to separate the business logic from the data access code.

How LINQ to SQL Invokes Your Business Logic
When you generate an entity class at design time, either manually or by using the Object Relational Designer or SQLMetal, it is defined as a partial class.
This means that, in a separate code file, you can define another part of the entity class that contains your custom business logic. At compile time, the two parts are merged into a single class. If you have to regenerate your entity classes by using the Object Relational Designer or SQLMetal, you can do so and your part of the class will not be modified.

The partial classes that define entities and the DataContext contain partial methods. These are the extensibility points that you can use to apply your business logic before and after any update, insert, or delete for an entity or entity property. Partial methods can be thought of as compile-time events. The code generator defines a method signature and calls the methods in the get and set property accessors, the DataContext constructor, and, in some cases, behind the scenes when SubmitChanges is called. However, if you do not implement a particular partial method, then all the references to it and the definition are removed at compile time.

In the implementing definition that you write in your separate code file, you can perform whatever custom logic is required. You can use your partial class itself as your domain layer, or you can call from your implementing definition of the partial method into a separate object or objects. Either way, your business logic is cleanly separated from both your data access code and your presentation layer code.

A Closer Look at the Extensibility Points
The following example shows part of the code generated by the Object Relational Designer for a DataContext class that has two tables: Customers and Orders. Note that Insert, Update, and Delete methods are defined for each table in the class.
C#
public partial class MyNorthWindDataContext : System.Data.Linq.DataContext
{
    private static System.Data.Linq.Mapping.MappingSource mappingSource =
        new AttributeMappingSource();

    #region Extensibility Method Definitions
    partial void OnCreated();
    partial void InsertCustomer(Customer instance);
    partial void UpdateCustomer(Customer instance);
    partial void DeleteCustomer(Customer instance);
    partial void InsertOrder(Order instance);
    partial void UpdateOrder(Order instance);
    partial void DeleteOrder(Order instance);
    #endregion

If you implement the Insert, Update, and Delete methods in your partial class, the LINQ to SQL runtime will call them instead of its own default methods when SubmitChanges is called. This enables you to override the default behavior for create / read / update / delete operations. For more information, see Walkthrough: Customizing the insert, update, and delete behavior of entity classes.

The OnCreated method is called in the class constructor:

C#
public MyNorthWindDataContext(string connection) :
    base(connection, mappingSource)
{
    OnCreated();
}

The entity classes have three methods that are called by the LINQ to SQL runtime when the entity is created, loaded, and validated (when SubmitChanges is called). The entity classes also have two partial methods for each property, one that is called before the property is set, and one that is called after.
The following code example shows some of the methods generated for the Customer class:

C#
#region Extensibility Method Definitions
partial void OnLoaded();
partial void OnValidate();
partial void OnCreated();
partial void OnCustomerIDChanging(string value);
partial void OnCustomerIDChanged();
partial void OnCompanyNameChanging(string value);
partial void OnCompanyNameChanged();
// ...additional Changing/Changed methods for each property

The methods are called in the property set accessor, as shown in the following example for the CustomerID property:

C#
public string CustomerID
{
    set
    {
        if ((this._CustomerID != value))
        {
            this.OnCustomerIDChanging(value);
            this.SendPropertyChanging();
            this._CustomerID = value;
            this.SendPropertyChanged("CustomerID");
            this.OnCustomerIDChanged();
        }
    }
}

In your part of the class, you write an implementing definition of the method. In Visual Studio, after you type partial, you will see IntelliSense for the method definitions in the other part of the class.

C#
partial class Customer
{
    partial void OnCustomerIDChanging(string value)
    {
        // Perform custom validation logic here.
    }
}

For more information about how to add business logic to your application by using partial methods, see the following topics:

How to: Add validation to entity classes
Walkthrough: Customizing the insert, update, and delete behavior of entity classes
Walkthrough: Adding Validation to Entity Classes

See also
Data Retrieval and CUD Operations in N-Tier Applications (LINQ to SQL)

When you serialize entity objects such as Customers or Orders to a client over a network, those entities are detached from their data context. The data context no longer tracks their changes or their associations with other objects. This is not an issue as long as the clients are only reading the data. It is also relatively simple to enable clients to add new rows to a database. However, if your application requires that clients be able to update or delete data, then you must attach the entities to a new data context before you call DataContext.SubmitChanges. In addition, if you are using an optimistic concurrency check with original values, then you will also need a way to provide the database with both the original entity and the entity as modified. The Attach methods are provided to enable you to put entities into a new data context after they have been detached.

Even if you are serializing proxy objects in place of the LINQ to SQL entities, you still have to construct an entity on the data access layer (DAL) and attach it to a new System.Data.Linq.DataContext in order to submit the data to the database. LINQ to SQL is completely indifferent about how entities are serialized. For more information about how to use the Object Relational Designer and SQLMetal tools to generate classes that are serializable by using Windows Communication Foundation (WCF), see How to: Make Entities Serializable.

Note: Only call the Attach methods on new or deserialized entities. The only way for an entity to be detached from its original data context is for it to be serialized. If you try to attach an undetached entity to a new data context, and that entity still has deferred loaders from its previous data context, LINQ to SQL will throw an exception. An entity with deferred loaders from two different data contexts could cause unwanted results when you perform insert, update, and delete operations on that entity.
For more information about deferred loaders, see Deferred versus Immediate Loading.

Retrieving Data
Client Method Call
The following example shows a sample method call to the DAL from a Windows Forms client. In this example, the DAL is implemented as a Windows Service Library:

C#
private void GetProdsByCat_Click(object sender, EventArgs e)
{
    // Create the WCF client proxy.
    NorthwindServiceReference.Service1Client proxy =
        new NorthwindClient.NorthwindServiceReference.Service1Client();

    // Call the method on the service.
    NorthwindServiceReference.Product[] products =
        proxy.GetProductsByCategory(1);

    // If the database uses original values for concurrency checks,
    // the client needs to store them and pass them back to the
    // middle tier along with the new values when updating data.
    foreach (var v in products)
    {
        // Persist to a List<Product> declared at class scope.
        // Additional change-tracking logic is the responsibility
        // of the presentation tier and/or middle tier.
        originalProducts.Add(v);
    }

    // (Not shown) Bind the products list to a control
    // and/or perform whatever processing is necessary.
}

Middle Tier Implementation
The following example shows an implementation of the interface method on the middle tier. The following are the two main points to note:
public IEnumerable<Product> GetProductsByCategory(int categoryID) { NorthwindClasses1DataContext db = new NorthwindClasses1DataContext(connectionString); IEnumerable<Product> productQuery = from prod in db.Products where prod.CategoryID == categoryID select prod; return productQuery.AsEnumerable(); } An instance of a data context should have a lifetime of one "unit of work." In a loosely-coupled environment, a unit of work is typically small, perhaps one optimistic transaction, including a single call to SubmitChanges. Therefore, the data context is created and disposed at method scope. If the unit of work includes calls to business rules logic, then generally you will want to keep the DataContext instance for that whole operation. In any case, DataContext instances are not intended to be kept alive for long periods of time across arbitrary numbers of transactions. This method will return Product objects but not the collection of Order_Detail objects that are associated with each Product. Use the DataLoadOptions object to change this default behavior. For more information, see How to: Control How Much Related Data Is Retrieved. Inserting DataTo insert a new object, the presentation tier just calls the relevant method on the middle tier interface, and passes in the new object to insert. In some cases, it may be more efficient for the client to pass in only some values and have the middle tier construct the full object. Middle Tier ImplementationOn the middle tier, a new DataContext is created, the object is attached to the DataContext by using the InsertOnSubmit method, and the object is inserted when SubmitChanges is called. Exceptions, callbacks, and error conditions can be handled just as in any other Web service scenario. C#// No call to Attach is necessary for inserts. public void InsertOrder(Order o) { NorthwindClasses1DataContext db = new NorthwindClasses1DataContext(connectionString); db.Orders.InsertOnSubmit(o); // Exception handling not shown. 
db.SubmitChanges(); } Deleting DataTo delete an existing object from the database, the presentation tier calls the relevant method on the middle tier interface, and passes in its copy of the object to be deleted, including that object's original values. Delete operations involve optimistic concurrency checks, and the object to be deleted must first be attached to the new data context. In this example, the Boolean parameter is set to false to indicate that the object does not have a timestamp (RowVersion). If your database table does generate timestamps for each record, then concurrency checks are much simpler, especially for the client. Just pass in either the original or modified object and set the Boolean parameter to true. In any case, on the middle tier it is typically necessary to catch the ChangeConflictException. For more information about how to handle optimistic concurrency conflicts, see Optimistic Concurrency: Overview. When deleting entities that have foreign key constraints on associated tables, you must first delete all the objects in the entity's EntitySet<TEntity> collections. C#// Attach is necessary for deletes. public void DeleteOrder(Order order) { NorthwindClasses1DataContext db = new NorthwindClasses1DataContext(connectionString); db.Orders.Attach(order, false); // This will throw an exception if the order has order details. db.Orders.DeleteOnSubmit(order); try { // ConflictMode is an optional parameter. db.SubmitChanges(ConflictMode.ContinueOnConflict); } catch (ChangeConflictException e) { // Get conflict information, and take actions // that are appropriate for your application. // See MSDN Article How to: Manage Change Conflicts (LINQ to SQL). } } Updating DataLINQ to SQL supports updates in these scenarios involving optimistic concurrency:
You can also perform updates or deletes on an entity together with its relations, for example a Customer and a collection of its associated Order objects. When you make modifications on the client to a graph of entity objects and their child (EntitySet) collections, and the optimistic concurrency checks require original values, the client must provide those original values for each entity and EntitySet<TEntity> object. If you want to enable clients to make a set of related updates, deletes, and insertions in a single method call, you must provide the client a way to indicate what type of operation to perform on each entity. On the middle tier, you then must call the appropriate Attach method for each updated entity, Attach followed by DeleteOnSubmit (or DeleteAllOnSubmit) for each deleted entity, and InsertOnSubmit (without Attach) for each inserted entity, before you call SubmitChanges. Do not retrieve data from the database as a way to obtain original values before you try updates. For more information about optimistic concurrency, see Optimistic Concurrency: Overview. For detailed information about resolving optimistic concurrency change conflicts, see How to: Manage Change Conflicts. The following examples demonstrate each scenario: Optimistic concurrency with timestampsC#// Assume that "customer" has been sent by client. // Attach with "true" to say this is a modified entity // and it can be checked for optimistic concurrency because // it has a column that is mapped as a version (RowVersion) column. db.Customers.Attach(customer, true); try { // Optional: Specify a ConflictMode value // in call to SubmitChanges. db.SubmitChanges(); } catch (ChangeConflictException e) { // Handle conflict based on options provided. // See MSDN article How to: Manage Change Conflicts (LINQ to SQL). } With Subset of Original ValuesIn this approach, the client returns the complete serialized object, together with the values to be modified. C#public void UpdateProductInventory(Product p, short? unitsInStock, short?
unitsOnOrder) { using (NorthwindClasses1DataContext db = new NorthwindClasses1DataContext(connectionString)) { // p is the original unmodified product // that was obtained from the database. // The client kept a copy and returns it now. db.Products.Attach(p, false); // Now that the original values are in the data context, apply the changes. p.UnitsInStock = unitsInStock; p.UnitsOnOrder = unitsOnOrder; try { // Optional: Specify a ConflictMode value // in call to SubmitChanges. db.SubmitChanges(); } catch (ChangeConflictException e) { // Handle conflict based on provided options. // See MSDN article How to: Manage Change Conflicts // (LINQ to SQL). } } } With Complete EntitiesC#public void UpdateProductInfo(Product newProd, Product originalProd) { using (NorthwindClasses1DataContext db = new NorthwindClasses1DataContext(connectionString)) { db.Products.Attach(newProd, originalProd); try { // Optional: Specify a ConflictMode value // in call to SubmitChanges. db.SubmitChanges(); } catch (ChangeConflictException e) { // Handle potential change conflict in whatever way // is appropriate for your application. // For more information, see the MSDN article // How to: Manage Change Conflicts (LINQ to SQL). } } } To update a collection, call AttachAll instead of Attach. Expected Entity MembersAs stated previously, only certain members of the entity object are required to be set before you call the Attach methods. Entity members that are required to be set must fulfill the following criteria:
If a table uses a timestamp or version number for an optimistic concurrency check, you must set those members before you call Attach. A member is dedicated for optimistic concurrency checking when the IsVersion property is set to true on that Column attribute. Any requested updates will be submitted only if the version number or timestamp values are the same on the database. A member is also used in the optimistic concurrency check as long as the member does not have UpdateCheck set to Never. The default value is Always if no other value is specified. If any one of these required members is missing, a ChangeConflictException is thrown during SubmitChanges ("Row not found or changed"). StateAfter an entity object is attached to the DataContext instance, the object is considered to be in the PossiblyModified state. There are three ways to force an attached object to be considered Modified.
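The concurrency-mapping rules described above (IsVersion and UpdateCheck) can be sketched in an entity class. This is a minimal illustration only; the table and member names here are hypothetical, not taken from the surrounding examples:

```csharp
[Table(Name = "Products")]
public class Product
{
    [Column(IsPrimaryKey = true)]
    public int ProductID;

    // IsVersion = true designates this member as the version/timestamp
    // column, so it is the member used for optimistic concurrency checks.
    [Column(IsVersion = true)]
    public System.Data.Linq.Binary RowVersion;

    // UpdateCheck.Never excludes this member from concurrency checking.
    [Column(UpdateCheck = UpdateCheck.Never)]
    public string Notes;

    // No UpdateCheck specified: the default is UpdateCheck.Always,
    // so this member would otherwise participate in the check.
    [Column]
    public decimal UnitPrice;
}
```

With a version member such as RowVersion mapped, only that member and the primary key need to be set before calling Attach.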
For more information, see Object States and Change-Tracking. If an entity object already occurs in the ID Cache with the same identity as the object being attached, a DuplicateKeyException is thrown. When you attach with an IEnumerable set of objects, a DuplicateKeyException is thrown when an already existing key is present. Remaining objects are not attached. See alsoObject IdentityObjects in the runtime have unique identities. Two variables that refer to the same object actually refer to the same instance of the object. Because of this fact, changes that you make by way of a path through one variable are immediately visible through the other. Rows in a relational database table do not have unique identities. Because each row has a unique primary key, no two rows share the same key value. However, this fact constrains only the contents of the database table. In reality, data is most often brought out of the database and into a different tier, where an application works with it. This is the model that LINQ to SQL supports. When data is brought out of the database as rows, you have no expectation that two rows that represent the same data actually correspond to the same row instances. If you query for a specific customer two times, you get two rows of data. Each row contains the same information. With objects you expect something very different. You expect that if you ask the DataContext for the same information repeatedly, it will in fact give you the same object instance. You expect this behavior because objects have special meaning for your application and you expect them to behave like objects. You designed them as hierarchies or graphs. You expect to retrieve them as such and not to receive multitudes of replicated instances just because you asked for the same thing more than one time. In LINQ to SQL, the DataContext manages object identity. 
Whenever you retrieve a new row from the database, the row is logged in an identity table by its primary key, and a new object is created. Whenever you retrieve that same row, the original object instance is handed back to the application. In this manner the DataContext translates the concept of identity as seen by the database (that is, primary keys) into the concept of identity seen by the language (that is, instances). The application only sees the object in the state that it was first retrieved. The new data, if different, is discarded. For more information, see Retrieving Objects from the Identity Cache. LINQ to SQL uses this approach to manage the integrity of local objects in order to support optimistic updates. Because the only changes that occur after the object is at first created are those made by the application, the intent of the application is clear. If changes by an outside party have occurred in the interim, they are identified at the time SubmitChanges() is called. Note If the object requested by the query is easily identifiable as one already retrieved, no query is executed. The identity table acts as a cache of all previously retrieved objects. ExamplesObject Caching Example 1In this example, if you execute the same query two times, you receive a reference to the same object in memory every time. C#Customer cust1 = (from cust in db.Customers where cust.CustomerID == "BONAP" select cust).First(); Customer cust2 = (from cust in db.Customers where cust.CustomerID == "BONAP" select cust).First(); Object Caching Example 2In this example, if you execute different queries that return the same row from the database, you receive a reference to the same object in memory every time. 
C#Customer cust1 = (from cust in db.Customers where cust.CustomerID == "BONAP" select cust).First(); Customer cust2 = (from ord in db.Orders where ord.Customer.CustomerID == "BONAP" select ord).First().Customer; See alsoThe LINQ to SQL Object ModelIn LINQ to SQL, an object model expressed in the programming language of the developer is mapped to the data model of a relational database. Operations on the data are then conducted according to the object model. In this scenario, you do not issue database commands (for example, INSERT) to the database. Instead, you change values and execute methods within your object model. When you want to query the database or send it changes, LINQ to SQL translates your requests into the correct SQL commands and sends those commands to the database.
The most fundamental elements in the LINQ to SQL object model and their relationship to elements in the relational data model are summarized in the following table:
Note The following descriptions assume that you have a basic knowledge of the relational data model and rules. LINQ to SQL Entity Classes and Database TablesIn LINQ to SQL, a database table is represented by an entity class. An entity class is like any other class you might create except that you annotate the class by using special information that associates the class with a database table. You make this annotation by adding a custom attribute (TableAttribute) to your class declaration, as in the following example: ExampleC#[Table(Name = "Customers")] public class Customer { public string CustomerID; // ... public string City; } Only instances of classes declared as tables (that is, entity classes) can be saved to the database. For more information, see the Table Attribute section of Attribute-Based Mapping. LINQ to SQL Class Members and Database ColumnsIn addition to associating classes with tables, you designate fields or properties to represent database columns. For this purpose, LINQ to SQL defines the ColumnAttribute attribute, as in the following example: ExampleC#[Table(Name = "Customers")] public class Customer { [Column(IsPrimaryKey = true)] public string CustomerID; [Column] public string City; } Only fields and properties mapped to columns are persisted to or retrieved from the database. Those not declared as columns are considered as transient parts of your application logic. The ColumnAttribute attribute has a variety of properties that you can use to customize these members that represent columns (for example, designating a member as representing a primary key column). For more information, see the Column Attribute section of Attribute-Based Mapping. LINQ to SQL Associations and Database Foreign-key RelationshipsIn LINQ to SQL, you represent database associations (such as foreign-key to primary-key relationships) by applying the AssociationAttribute attribute. 
In the following segment of code, the Order class contains a Customer property that has an AssociationAttribute attribute. This property and its attribute provide the Order class with a relationship to the Customer class. The following code example shows the Customer property from the Order class. ExampleC#[Association(Name="FK_Orders_Customers", Storage="_Customer", ThisKey="CustomerID", IsForeignKey=true)] public Customer Customer { get { return this._Customer.Entity; } set { Customer previousValue = this._Customer.Entity; if (((previousValue != value) || (this._Customer.HasLoadedOrAssignedValue == false))) { this.SendPropertyChanging(); if ((previousValue != null)) { this._Customer.Entity = null; previousValue.Orders.Remove(this); } this._Customer.Entity = value; if ((value != null)) { value.Orders.Add(this); this._CustomerID = value.CustomerID; } else { this._CustomerID = default(string); } this.SendPropertyChanged("Customer"); } } } For more information, see the Association Attribute section of Attribute-Based Mapping. LINQ to SQL Methods and Database Stored ProceduresLINQ to SQL supports stored procedures and user-defined functions. In LINQ to SQL, you map these database-defined abstractions to client objects so that you can access them in a strongly typed manner from client code. The method signatures resemble as closely as possible the signatures of the procedures and functions defined in the database. You can use IntelliSense to discover these methods. A result set that is returned by a call to a mapped procedure is a strongly typed collection. LINQ to SQL maps stored procedures and functions to methods by using the FunctionAttribute and ParameterAttribute attributes. Methods representing stored procedures are distinguished from those representing user-defined functions by the IsComposable property. If this property is set to false (the default), the method represents a stored procedure. If it is set to true, the method represents a database function. 
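As a sketch of the composable case, a method mapped to a scalar user-defined function sets IsComposable to true so the method can be composed inside queries. The following follows the same pattern as the stored-procedure mapping shown below; the function name comes from the common Northwind samples and may differ in your database:

```csharp
// Maps a scalar user-defined function. IsComposable = true means the
// method represents a database function rather than a stored procedure.
[Function(Name = "dbo.MinUnitPriceByCategory", IsComposable = true)]
public System.Nullable<decimal> MinUnitPriceByCategory(
    [Parameter(DbType = "Int")] System.Nullable<int> categoryID)
{
    return ((System.Nullable<decimal>)(this.ExecuteMethodCall(this,
        ((MethodInfo)(MethodInfo.GetCurrentMethod())),
        categoryID).ReturnValue));
}
```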
Note If you are using Visual Studio, you can use the Object Relational Designer to create methods mapped to stored procedures and user-defined functions. ExampleC#// This is an example of a stored procedure in the Northwind // sample database. The IsComposable property defaults to false. [Function(Name="dbo.CustOrderHist")] public ISingleResult<CustOrderHistResult> CustOrderHist([Parameter(Name="CustomerID", DbType="NChar(5)")] string customerID) { IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), customerID); return ((ISingleResult<CustOrderHistResult>)(result.ReturnValue)); } For more information, see the Function Attribute, Stored Procedure Attribute, and Parameter Attribute sections of Attribute-Based Mapping and Stored Procedures. See alsoObject States and Change-TrackingLINQ to SQL objects always participate in some state. For example, when LINQ to SQL creates a new object, the object is in Unchanged state. A new object that you yourself create is unknown to the DataContext and is in Untracked state. Following successful execution of SubmitChanges, all objects known to LINQ to SQL are in Unchanged state. (The single exception is represented by those that have been successfully deleted from the database, which are in Deleted state and unusable in that DataContext instance.) Object StatesThe following table lists the possible states for LINQ to SQL objects.
Inserting ObjectsYou can explicitly request Inserts by using InsertOnSubmit. Alternatively, LINQ to SQL can infer Inserts by finding objects connected to one of the known objects that must be updated. For example, if you add an Untracked object to an EntitySet<TEntity> or set an EntityRef<TEntity> to an Untracked object, you make the Untracked object reachable by way of tracked objects in the graph. While processing SubmitChanges, LINQ to SQL traverses the tracked objects and discovers any reachable persistent objects that are not tracked. Such objects are candidates for insertion into the database. For classes in an inheritance hierarchy, InsertOnSubmit(o) also sets the value of the member designated as the discriminator to match the type of the object o. In the case of a type matching the default discriminator value, this action causes the discriminator value to be overwritten with the default value. For more information, see Inheritance Support. Important An object added to a Table is not in the identity cache. The identity cache reflects only what is retrieved from the database. After a call to InsertOnSubmit, the added entity does not appear in queries against the database until SubmitChanges is successfully completed. Deleting ObjectsYou mark a tracked object o for deletion by calling DeleteOnSubmit(o) on the appropriate Table<TEntity>. LINQ to SQL considers the removal of an object from an EntitySet<TEntity> as an update operation, and the corresponding foreign key value is set to null. The target of the operation (o) is not deleted from its table. For example, cust.Orders.Remove(ord) indicates an update where the relationship between cust and ord is severed by setting the foreign key ord.CustomerID to null. It does not cause the deletion of the row corresponding to ord. LINQ to SQL performs the following processing when an object is deleted (DeleteOnSubmit) from its table:
You can call DeleteOnSubmit only on an object tracked by the DataContext. For an Untracked object, you must call Attach before you call DeleteOnSubmit. Calling DeleteOnSubmit on an Untracked object throws an exception. Note Removing an object from a table tells LINQ to SQL to generate a corresponding SQL DELETE command at the time of SubmitChanges. This action does not remove the object from the cache or propagate the deletion to related objects. To reclaim the id of a deleted object, use a new DataContext instance. For cleanup of related objects, you can use the cascade delete feature of the database, or else manually delete the related objects. The related objects do not have to be deleted in any special order (unlike in the database). Updating ObjectsYou can detect Updates by observing notifications of changes. Notifications are provided through the PropertyChanging event in property setters. When LINQ to SQL is notified of the first change to an object, it creates a copy of the object and considers the object a candidate for generating an Update statement. For objects that do not implement INotifyPropertyChanging, LINQ to SQL maintains a copy of the values that objects had when they were first materialized. When you call SubmitChanges, LINQ to SQL compares the current and original values to decide whether the object has been changed. For updates to relationships, the reference from the child to the parent (that is, the reference corresponding to the foreign key) is considered the authority. The reference in the reverse direction (that is, from parent to child) is optional. Relationship classes (EntitySet<TEntity> and EntityRef<TEntity>) guarantee that the bidirectional references are consistent for one-to-many and one-to-one relationships. If the object model does not use EntitySet<TEntity> or EntityRef<TEntity>, and if the reverse reference is present, it is your responsibility to keep it consistent with the forward reference when the relationship is updated. 
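The change-notification mechanism described above can be sketched as a minimal hand-written entity property. This is an illustration only; designer-generated classes also implement INotifyPropertyChanged and raise PropertyChanged, which is omitted here:

```csharp
[Table(Name = "Products")]
public class Product : INotifyPropertyChanging
{
    public event PropertyChangingEventHandler PropertyChanging;

    private string _productName;

    [Column(Storage = "_productName")]
    public string ProductName
    {
        get { return _productName; }
        set
        {
            if (_productName != value)
            {
                // Raising PropertyChanging before assigning the new value
                // lets the DataContext snapshot the original value and
                // mark the object as a candidate for an Update statement.
                if (PropertyChanging != null)
                {
                    PropertyChanging(this,
                        new PropertyChangingEventArgs("ProductName"));
                }
                _productName = value;
            }
        }
    }
}
```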
If you update both the required reference and the corresponding foreign key, you must make sure that they agree. An InvalidOperationException exception is thrown if the two are not synchronized at the time that you call SubmitChanges. Although foreign key value changes are sufficient for affecting an update of the underlying row, you should change the reference to maintain connectivity of the object graph and bidirectional consistency of relationships. See alsoOptimistic Concurrency: OverviewLINQ to SQL supports optimistic concurrency control. The following table describes terms that apply to optimistic concurrency in LINQ to SQL documentation:
In the LINQ to SQL object model, an optimistic concurrency conflict occurs when both of the following conditions are true:
Resolution of this conflict includes discovering which members of the object are in conflict, and then deciding what you want to do about it. Note Only members mapped as Always or WhenChanged participate in optimistic concurrency checks. No check is performed for members marked Never. For more information, see UpdateCheck. ExampleFor example, in the following scenario, User1 starts to prepare an update by querying the database for a row. User1 receives a row with values of Alfreds, Maria, and Sales. User1 wants to change the value of the Manager column to Alfred and the value of the Department column to Marketing. Before User1 can submit those changes, User2 has submitted changes to the database. So now the value of the Assistant column has been changed to Mary and the value of the Department column to Service. When User1 now tries to submit changes, the submission fails and a ChangeConflictException exception is thrown. This result occurs because the database values for the Assistant column and the Department column are not those that were expected. Members representing the Assistant and Department columns are in conflict. The following table summarizes the situation.
You can resolve conflicts such as this in different ways. For more information, see How to: Manage Change Conflicts. Conflict Detection and Resolution ChecklistYou can detect and resolve conflicts at any level of detail. At one extreme, you can resolve all conflicts in one of three ways (see RefreshMode) without additional consideration. At the other extreme, you can designate a specific action for each type of conflict on every member in conflict.
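At the coarse-grained end of that range, every conflict can be resolved with a single RefreshMode value. The following sketch shows the typical catch block, using the ObjectChangeConflict API:

```csharp
try
{
    db.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException)
{
    foreach (ObjectChangeConflict occ in db.ChangeConflicts)
    {
        // KeepChanges keeps the values you changed and refreshes the
        // other members from the database. The alternatives are
        // RefreshMode.KeepCurrentValues and RefreshMode.OverwriteCurrentValues.
        occ.Resolve(RefreshMode.KeepChanges);
    }
    // Resubmit after the conflicts have been resolved.
    db.SubmitChanges();
}
```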
LINQ to SQL Types That Support Conflict Discovery and ResolutionClasses and features to support the resolution of conflicts in optimistic concurrency in LINQ to SQL include the following: See alsoQuery ConceptsThis section describes key concepts for designing LINQ queries in LINQ to SQL. In This Section
LINQ to SQL Queries
Querying Across Relationships
Remote vs. Local Execution
Deferred versus Immediate Loading Related Sections
Programming Guide
Object Identity
Introduction to LINQ Queries (C#) LINQ to SQL QueriesYou define LINQ to SQL queries by using the same syntax as you would in LINQ. The only difference is that the objects referenced in your queries are mapped to elements in a database. For more information, see Introduction to LINQ Queries (C#). LINQ to SQL translates the queries you write into equivalent SQL queries and sends them to the server for processing. More specifically, your application uses the LINQ to SQL API to request query execution. The LINQ to SQL provider then transforms the query into SQL text and delegates execution to the ADO provider. The ADO provider returns query results as a DataReader. The LINQ to SQL provider translates the ADO results to an IQueryable collection of user objects. Note Most methods and operators on .NET Framework built-in types have direct translations to SQL. Those that LINQ cannot translate generate run-time exceptions. For more information, see SQL-CLR Type Mapping. The following table shows the similarities and differences between LINQ and LINQ to SQL query items.
See also
Querying Across RelationshipsReferences to other objects or collections of other objects in your class definitions directly correspond to foreign-key relationships in the database. You can use these relationships when you query by using dot notation to access the relationship properties and navigate from one object to another. These access operations translate to more complex joins or correlated subqueries in the equivalent SQL. For example, the following query navigates from orders to customers as a way to restrict the results to only those orders for customers located in London. C#Northwnd db = new Northwnd(@"northwnd.mdf"); IQueryable<Order> londonOrderQuery = from ord in db.Orders where ord.Customer.City == "London" select ord; If relationship properties did not exist you would have to write them manually as joins, just as you would do in a SQL query, as in the following code: C#Northwnd db = new Northwnd(@"northwnd.mdf"); IQueryable<Order> londonOrderQuery = from cust in db.Customers join ord in db.Orders on cust.CustomerID equals ord.CustomerID where cust.City == "London" select ord; You can use the relationship property to define this particular relationship one time. You can then use the more convenient dot syntax. But relationship properties exist more importantly because domain-specific object models are typically defined as hierarchies or graphs. The objects that you program against have references to other objects. It is only a happy coincidence that object-to-object relationships correspond to foreign-key-styled relationships in databases. Property access then provides a convenient way to write joins. With regard to this, relationship properties are more important on the results side of a query than as part of the query itself. After the query has retrieved data about a particular customer, the class definition indicates that customers have orders. 
In other words, you expect the Orders property of a particular customer to be a collection that is populated with all the orders from that customer. That is in fact the contract you declared by defining the classes in this manner. You expect to see the orders there even if the query did not request orders. You expect your object model to maintain an illusion that it is an in-memory extension of the database with related objects immediately available. Now that you have relationships, you can write queries by referring to the relationship properties defined in your classes. These relationship references correspond to foreign-key relationships in the database. Operations that use these relationships translate to more complex joins in the equivalent SQL. As long as you have defined a relationship (using the AssociationAttribute attribute), you do not have to code an explicit join in LINQ to SQL. To help maintain this illusion, LINQ to SQL implements a technique called deferred loading. For more information, see Deferred versus Immediate Loading. Consider the following SQL query to project a list of CustomerID-OrderID pairs: SELECT t0.CustomerID, t1.OrderID FROM Customers AS t0 INNER JOIN Orders AS t1 ON t0.CustomerID = t1.CustomerID WHERE (t0.City = @p0) To obtain the same results by using LINQ to SQL, you use the Orders property reference already existing in the Customer class. The Orders reference provides the necessary information to execute the query and project the CustomerID-OrderID pairs, as in the following code: C#Northwnd db = new Northwnd(@"northwnd.mdf"); var idQuery = from cust in db.Customers from ord in cust.Orders where cust.City == "London" select new { cust.CustomerID, ord.OrderID }; You can also do the reverse. That is, you can query Orders and use its Customer relationship reference to access information about the associated Customer object. 
The following code projects the same CustomerID-OrderID pairs as before, but this time by querying Orders instead of Customers. C#Northwnd db = new Northwnd(@"northwnd.mdf"); var idQuery = from ord in db.Orders where ord.Customer.City == "London" select new { ord.Customer.CustomerID, ord.OrderID }; See alsoRemote vs. Local ExecutionYou can decide to execute your queries either remotely (that is, the database engine executes the query against the database) or locally (LINQ to SQL executes the query against a local cache). Remote ExecutionConsider the following query: C#Northwnd db = new Northwnd(@"northwnd.mdf"); Customer c = db.Customers.Single(x => x.CustomerID == "19283"); foreach (Order ord in c.Orders.Where(o => o.ShippedDate.Value.Year == 1998)) { // Do something. } If your database has thousands of rows of orders, you do not want to retrieve them all to process a small subset. In LINQ to SQL, the EntitySet<TEntity> class implements the IQueryable interface. This approach makes sure that such queries can be executed remotely. Two major benefits flow from this technique:
Local ExecutionIn other situations, you might want to have the complete set of related entities in the local cache. For this purpose, EntitySet<TEntity> provides the Load method to explicitly load all the members of the EntitySet<TEntity>. If an EntitySet<TEntity> is already loaded, subsequent queries are executed locally. This approach helps in two ways:
The following code fragment illustrates how local execution can be obtained: C#Northwnd db = new Northwnd(@"northwnd.mdf"); Customer c = db.Customers.Single(x => x.CustomerID == "19283"); c.Orders.Load(); foreach (Order ord in c.Orders.Where(o => o.ShippedDate.Value.Year == 1998)) { // Do something. } ComparisonThese two capabilities provide a powerful combination of options: remote execution for large collections and local execution for small collections or where the complete collection is needed. You implement remote execution through IQueryable, and local execution against an in-memory IEnumerable<T> collection. To force local execution (that is, IEnumerable<T>), see Convert a Type to a Generic IEnumerable. Queries Against Unordered SetsNote the important difference between a local collection that implements List<T> and a collection that provides remote queries executed against unordered sets in a relational database. List<T> methods such as those that use index values require list semantics, which typically cannot be obtained through a remote query against an unordered set. For this reason, such methods implicitly load the EntitySet<TEntity> to allow local execution. See alsoDeferred versus Immediate LoadingWhen you query for an object, you actually retrieve only the object you requested. The related objects are not automatically fetched at the same time. (For more information, see Querying Across Relationships.) You cannot see the fact that the related objects are not already loaded, because an attempt to access them produces a request that retrieves them. For example, you might want to query for a particular set of orders and then only occasionally send an email notification to particular customers. You would not necessarily need to retrieve all customer data with every order initially. You can use deferred loading to defer retrieval of extra information until you absolutely have to. 
Consider the following example: C#Northwnd db = new Northwnd(@"northwnd.mdf"); IQueryable<Order> notificationQuery = from ord in db.Orders where ord.ShipVia == 3 select ord; foreach (Order ordObj in notificationQuery) { if (ordObj.Freight > 200) SendCustomerNotification(ordObj.Customer); ProcessOrder(ordObj); } The opposite might also be true. You might have an application that has to view customer and order data at the same time. You know you need both sets of data. You know your application needs order information for each customer as soon as you get the results. You would not want to submit individual queries for orders for every customer. What you really want is to retrieve the order data together with the customers. C#Northwnd db = new Northwnd(@"c:\northwnd.mdf"); db.DeferredLoadingEnabled = false; IQueryable<Customer> custQuery = from cust in db.Customers where cust.City == "London" select cust; foreach (Customer custObj in custQuery) { foreach (Order ordObj in custObj.Orders) { ProcessCustomerOrder(ordObj); } } You can also join customers and orders in a query by forming the cross-product and retrieving all the relevant bits of data as one large projection. But these results are not entities. (For more information, see The LINQ to SQL Object Model.) Entities are objects that have identity and that you can modify, whereas these results would be projections that cannot be changed and persisted. Even worse, you would be retrieving lots of redundant data as each customer repeats for each order in the flattened join output. What you really need is a way to retrieve a set of related objects at the same time. The set is a delineated section of a graph so that you would never be retrieving more or less than was necessary for your intended use. For this purpose, LINQ to SQL provides DataLoadOptions for immediate loading of a region of your object model. Methods include:
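As a sketch of this immediate-loading approach, the LoadWith method on DataLoadOptions requests that related objects be retrieved together with the main target. The following fragment reuses the Northwind mapping from the examples above:

```csharp
Northwnd db = new Northwnd(@"northwnd.mdf");

// Retrieve the Orders collection together with each Customer,
// instead of deferring the load until the property is accessed.
DataLoadOptions dlo = new DataLoadOptions();
dlo.LoadWith<Customer>(cust => cust.Orders);
db.LoadOptions = dlo;

IQueryable<Customer> custQuery =
    from cust in db.Customers
    where cust.City == "London"
    select cust;
```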
See alsoRetrieving Objects from the Identity CacheThis topic describes the types of LINQ to SQL queries that return an object from the identity cache that is managed by the DataContext. In LINQ to SQL, one of the ways in which the DataContext manages objects is by logging object identities in an identity cache as queries are executed. In some cases, LINQ to SQL will attempt to retrieve an object from the identity cache before executing a query in the database. In general, for a LINQ to SQL query to return an object from the identity cache, the query must be based on the primary key of an object and must return a single object. In particular, the query must be in one of the general forms shown below. Note Pre-compiled queries will not return objects from the identity cache. For more information about pre-compiled queries, see CompiledQuery and How to: Store and Reuse Queries. A query must be in one of the following general forms to retrieve an object from the identity cache:
In these general forms, Function1, Function2, and predicate are defined as follows. Function1 can be any of the following: Function2 can be any of the following: predicate must be an expression in which the object's primary key property is set to a constant value. If an object has a primary key defined by more than one property, each primary key property must be set to a constant value. The following are examples of the form predicate must take:
ExampleThe following code provides examples of the types of LINQ to SQL queries that retrieve an object from the identity cache. C#NorthwindDataContext context = new NorthwindDataContext(); // This query does not retrieve an object from // the identity cache because it is the first query. // There are no objects in the cache. var a = context.Customers.First(); Console.WriteLine("First query gets customer {0}. ", a.CustomerID); // This query returns an object from the identity cache. var b = context.Customers.Where(c => c.CustomerID == a.CustomerID); foreach (var customer in b) { Console.WriteLine(customer.CustomerID); } // This query returns an object from the identity cache. // Note that calling FirstOrDefault(), Single(), or SingleOrDefault() // instead of First() will also return an object from the cache. var x = context.Customers. Where(c => c.CustomerID == a.CustomerID). First(); Console.WriteLine(x.CustomerID); // This query returns an object from the identity cache. // Note that calling FirstOrDefault(), Single(), or SingleOrDefault() // instead of First() (each with the same predicate) will also // return an object from the cache. var y = context.Customers.First(c => c.CustomerID == a.CustomerID); Console.WriteLine(y.CustomerID); See alsoSecurity in LINQ to SQLSecurity risks are always present when you connect to a database. Although LINQ to SQL may include some new ways to work with data in SQL Server, it does not provide any additional security mechanisms. Access Control and AuthenticationLINQ to SQL does not have its own user model or authentication mechanisms. Use SQL Server Security to control access to the database, database tables, views, and stored procedures that are mapped to your object model. Grant the minimally required access to users and require strong passwords for user authentication.
Mapping and Schema InformationSQL-CLR type mapping and database schema information in your object model or external mapping file is available to anyone with access to those files in the file system. Assume that schema information will be available to all who can access the object model or external mapping file. To prevent more widespread access to schema information, use file security mechanisms to secure source files and mapping files. Connection StringsUsing passwords in connection strings should be avoided whenever possible. Not only is a connection string a security risk in its own right, but the connection string may also be added in clear text to the object model or external mapping file when you use the Object Relational Designer or the SQLMetal command-line tool. Anyone with access to the object model or external mapping file through the file system could see the connection password (if it is included in the connection string). To minimize such risks, use integrated security to make a trusted connection with SQL Server. With this approach, you do not have to store a password in the connection string. For more information, see SQL Server Security. In the absence of integrated security, a clear-text password will be needed in the connection string. The options for helping to secure your connection string, listed in increasing order of risk, are as follows:
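Whichever option applies, a trusted connection avoids storing a password in the string entirely. A sketch (the server and database names here are placeholders):

```csharp
// Integrated Security=true requests Windows authentication, so no
// password appears in the connection string. The server and database
// names are placeholders.
string connectionString =
    @"Data Source=(local);Initial Catalog=Northwind;Integrated Security=true";
Northwnd db = new Northwnd(connectionString);
```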
See alsoSerializationThis topic describes LINQ to SQL serialization capabilities. The paragraphs that follow provide information about how to add serialization during code generation at design time and the run-time serialization behavior of LINQ to SQL classes. You can add serialization code at design time by either of the following methods:
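For example, when generating the classes with the SQLMetal command-line tool, the /serialization option adds the serialization attributes during code generation (a sketch; the server and database names are placeholders):

```shell
sqlmetal /server:(local) /database:Northwind /serialization:Unidirectional /code:Northwind.cs
```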
OverviewThe code generated by LINQ to SQL provides deferred loading capabilities by default. Deferred loading is very convenient on the mid-tier for transparent loading of data on demand. However, it is problematic for serialization, because the serializer triggers deferred loading whether deferred loading is intended or not. In effect, when an object is serialized, its transitive closure under all outbound defer-loaded references is serialized. The LINQ to SQL serialization feature addresses this problem, primarily through two mechanisms:
Definitions
Code ExampleThe following code uses the traditional Customer and Order classes from the Northwind sample database, and shows how these classes are decorated with serialization attributes. C#// The class is decorated with the DataContract attribute. [Table(Name="dbo.Customers")] [DataContract()] public partial class Customer : INotifyPropertyChanging, INotifyPropertyChanged {C# // Private fields are not decorated with any attributes, and are // elided. private string _CustomerID; // Public properties are decorated with the DataMember // attribute and the Order property specifying the serial // number. See the Order class later in this topic for // exceptions. public Customer() { this.Initialize(); } [Column(Storage="_CustomerID", DbType="NChar(5) NOT NULL", CanBeNull=false, IsPrimaryKey=true)] [DataMember(Order=1)] public string CustomerID { get { return this._CustomerID; } set { if ((this._CustomerID != value)) { this.OnCustomerIDChanging(value); this.SendPropertyChanging(); this._CustomerID = value; this.SendPropertyChanged("CustomerID"); this.OnCustomerIDChanged(); } } }C# // The following Association property is decorated with // DataMember because it is the parent side of the // relationship. The reverse property in the Order class // does not have a DataMember attribute. This factor // prevents a 'cycle.' [Association(Name="FK_Orders_Customers", Storage="_Orders", OtherKey="CustomerID", DeleteRule="NO ACTION")] [DataMember(Order=13)] public EntitySet<Order> Orders { get { return this._Orders; } set { this._Orders.Assign(value); } } For the Order class in the following example, only the reverse association property corresponding to the Customer class is shown for brevity. It does not have a DataMemberAttribute attribute to avoid a cycle. C#// The class for the Orders table is also decorated with the // DataContract attribute. 
[Table(Name="dbo.Orders")] [DataContract()] public partial class Order : INotifyPropertyChanging, INotifyPropertyChangedC# // Private fields for the Orders table are not decorated with // any attributes, and are elided. private int _OrderID; // Public properties are decorated with the DataMember // attribute. // The reverse Association property on the side of the // foreign key does not have the DataMember attribute. [Association(Name = "FK_Orders_Customers", Storage = "_Customer", ThisKey = "CustomerID", IsForeignKey = true)] public Customer Customer How to Serialize the EntitiesYou can serialize the entities in the code shown in the previous section as follows: C#Northwnd db = new Northwnd(@"c:\northwnd.mdf"); Customer cust = db.Customers.Where(c => c.CustomerID == "ALFKI").Single(); DataContractSerializer dcs = new DataContractSerializer(typeof(Customer)); StringBuilder sb = new StringBuilder(); XmlWriter writer = XmlWriter.Create(sb); dcs.WriteObject(writer, cust); writer.Close(); string xml = sb.ToString(); Self-Recursive RelationshipsSelf-recursive relationships follow the same pattern. The association property corresponding to the foreign key does not have a DataMemberAttribute attribute, whereas the parent property does. Consider the following class that has two self-recursive relationships: Employee.Manager/Reports and Employee.Mentor/Mentees. C#// No DataMember attribute. public Employee Manager; [DataMember(Order = 3)] public EntitySet<Employee> Reports; // No DataMember attribute. public Employee Mentor; [DataMember(Order = 5)] public EntitySet<Employee> Mentees; See alsoStored ProceduresLINQ to SQL uses methods in your object model to represent stored procedures in the database. You designate methods as stored procedures by applying the FunctionAttribute attribute and, where required, the ParameterAttribute attribute. For more information, see The LINQ to SQL Object Model.
Developers using Visual Studio would typically use the Object Relational Designer to map stored procedures. The topics in this section show how to form and call these methods in your application if you write the code yourself. In This Section
How to: Return Rowsets
How to: Use Stored Procedures that Take Parameters
How to: Use Stored Procedures Mapped for Multiple Result Shapes
How to: Use Stored Procedures Mapped for Sequential Result Shapes
Customizing Operations By Using Stored Procedures
Customizing Operations by Using Stored Procedures Exclusively Related Sections
Programming Guide
Walkthrough: Using Only Stored Procedures (Visual Basic)
Walkthrough: Using Only Stored Procedures (C#) How to: Return RowsetsThis example returns a rowset from the database, and includes an input parameter to filter the result. When you execute a stored procedure that returns a rowset, you use a result class that stores the returns from the stored procedure. For more information, see Analyzing LINQ to SQL Source Code. ExampleThe following example represents a stored procedure that returns rows of customers and uses an input parameter to return only those rows that list "London" as the customer city. The example assumes an enumerable CustomersByCityResult class. CREATE PROCEDURE [dbo].[Customers By City] (@param1 NVARCHAR(20)) AS BEGIN -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; SELECT CustomerID, ContactName, CompanyName, City from Customers as c where c.City=@param1 ENDC# [Function(Name="dbo.Customers By City")] public ISingleResult<CustomersByCityResult> CustomersByCity([Parameter(DbType="NVarChar(20)")] string param1) { IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), param1); return ((ISingleResult<CustomersByCityResult>)(result.ReturnValue)); } // Call the stored procedure. void ReturnRowset() { Northwnd db = new Northwnd(@"c:\northwnd.mdf"); ISingleResult<CustomersByCityResult> result = db.CustomersByCity("London"); foreach (CustomersByCityResult cust in result) { Console.WriteLine("CustID={0}; City={1}", cust.CustomerID, cust.City); } } See alsoHow to: Use Stored Procedures that Take ParametersLINQ to SQL maps output parameters to reference parameters, and for value types declares the parameter as nullable. For an example of how to use an input parameter in a query that returns a rowset, see How to: Return Rowsets. ExampleThe following example takes a single input parameter (the customer ID) and returns an out parameter (the total sales for that customer). 
CREATE PROCEDURE [dbo].[CustOrderTotal] @CustomerID nchar(5), @TotalSales money OUTPUT AS SELECT @TotalSales = SUM(OD.UNITPRICE*(1-OD.DISCOUNT) * OD.QUANTITY) FROM ORDERS O, "ORDER DETAILS" OD where O.CUSTOMERID = @CustomerID AND O.ORDERID = OD.ORDERIDC# [Function(Name="dbo.CustOrderTotal")] [return: Parameter(DbType="Int")] public int CustOrderTotal([Parameter(Name="CustomerID", DbType="NChar(5)")] string customerID, [Parameter(Name="TotalSales", DbType="Money")] ref System.Nullable<decimal> totalSales) { IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), customerID, totalSales); totalSales = ((System.Nullable<decimal>)(result.GetParameterValue(1))); return ((int)(result.ReturnValue)); } ExampleYou would call this stored procedure as follows: C#Northwnd db = new Northwnd(@"c:\northwnd.mdf"); decimal? totalSales = 0; db.CustOrderTotal("alfki", ref totalSales); Console.WriteLine(totalSales); See alsoHow to: Use Stored Procedures Mapped for Multiple Result ShapesWhen a stored procedure can return multiple result shapes, the return type cannot be strongly typed to a single projection shape. Although LINQ to SQL can generate all possible projection types, it cannot know the order in which they will be returned. Contrast this scenario with stored procedures that produce multiple result shapes sequentially. For more information, see How to: Use Stored Procedures Mapped for Sequential Result Shapes. The ResultTypeAttribute attribute is applied to stored procedures that return multiple result types to specify the set of types the procedure can return. ExampleIn the following SQL code example, the result shape depends on the input (shape =1 or shape = 2). You do not know which projection will be returned first. 
CREATE PROCEDURE VariableResultShapes(@shape int) AS if(@shape = 1) select CustomerID, ContactTitle, CompanyName from customers else if(@shape = 2) select OrderID, ShipName from ordersC# [Function(Name="dbo.VariableResultShapes")] [ResultType(typeof(VariableResultShapesResult1))] [ResultType(typeof(VariableResultShapesResult2))] public IMultipleResults VariableResultShapes([Parameter(DbType="Int")] System.Nullable<int> shape) { IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), shape); return ((IMultipleResults)(result.ReturnValue)); } ExampleYou would use code similar to the following to execute this stored procedure. Note You must use the GetResult pattern to obtain an enumerator of the correct type, based on your knowledge of the stored procedure. C#Northwnd db = new Northwnd(@"c:\northwnd.mdf"); // Assign the results of the procedure with an argument // of (1) to local variable 'result'. IMultipleResults result = db.VariableResultShapes(1); // Iterate through the list and write results (the company names) // to the console. foreach(VariableResultShapesResult1 compName in result.GetResult<VariableResultShapesResult1>()) { Console.WriteLine(compName.CompanyName); } // Pause to view company names; press Enter to continue. Console.ReadLine(); // Assign the results of the procedure with an argument // of (2) to local variable 'result'. IMultipleResults result2 = db.VariableResultShapes(2); // Iterate through the list and write results (the order IDs) // to the console. foreach (VariableResultShapesResult2 ord in result2.GetResult<VariableResultShapesResult2>()) { Console.WriteLine(ord.OrderID); } See alsoHow to: Use Stored Procedures Mapped for Sequential Result ShapesThis kind of stored procedure can generate more than one result shape, but you know in what order the results are returned. Contrast this scenario with the scenario where you do not know the sequence of the returns. 
For more information, see How to: Use Stored Procedures Mapped for Multiple Result Shapes. ExampleHere is the T-SQL of a stored procedure that returns multiple result shapes sequentially: CREATE PROCEDURE MultipleResultTypesSequentially AS select * from products select * from customersC# [Function(Name="dbo.MultipleResultTypesSequentially")] [ResultType(typeof(MultipleResultTypesSequentiallyResult1))] [ResultType(typeof(MultipleResultTypesSequentiallyResult2))] public IMultipleResults MultipleResultTypesSequentially() { IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod()))); return ((IMultipleResults)(result.ReturnValue)); } ExampleYou would use code similar to the following to execute this stored procedure. C#Northwnd db = new Northwnd(@"c:\northwnd.mdf"); IMultipleResults sprocResults = db.MultipleResultTypesSequentially(); // First read products. foreach (Product prod in sprocResults.GetResult<Product>()) { Console.WriteLine(prod.ProductID); } // Next read customers. foreach (Customer cust in sprocResults.GetResult<Customer>()) { Console.WriteLine(cust.CustomerID); } See alsoCustomizing Operations By Using Stored ProceduresStored procedures represent a common approach to overriding default behavior. The examples in this topic show how you can use generated method wrappers for stored procedures, and how you can call stored procedures directly. If you are using Visual Studio, you can use the Object Relational Designer to assign stored procedures to perform inserts, updates, and deletes. Note To read back database-generated values, use output parameters in your stored procedures. If you cannot use output parameters, write a partial method implementation instead of relying on overrides generated by the Object Relational Designer. Members mapped to database-generated values must be set to appropriate values after INSERT or UPDATE operations have successfully completed. 
For more information, see Responsibilities of the Developer In Overriding Default Behavior. ExampleDescriptionIn the following example, assume that the Northwind class contains two methods to call stored procedures that are being used for overrides in a derived class. CodeC#[Function()] public IEnumerable<Order> CustomerOrders( [Parameter(Name = "CustomerID", DbType = "NChar(5)")] string customerID) { IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), customerID); return ((IEnumerable<Order>)(result.ReturnValue)); } [Function()] public IEnumerable<Customer> CustomerById( [Parameter(Name = "CustomerID", DbType = "NChar(5)")] string customerID) { IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), customerID); return (IEnumerable<Customer>)(result.ReturnValue); } ExampleDescriptionThe following class uses these methods for the override. CodeC#public class NorthwindThroughSprocs : Northwnd { public NorthwindThroughSprocs(string connection) : base(connection) { } // Override loading of Customer.Orders by using method wrapper. private IEnumerable<Order> LoadOrders(Customer customer) { return this.CustomerOrders(customer.CustomerID); } // Override loading of Order.Customer by using method wrapper. private Customer LoadCustomer(Order order) { return this.CustomerById(order.CustomerID).Single(); } // Override INSERT operation on Customer by calling the // stored procedure directly. private void InsertCustomer(Customer customer) { // Call the INSERT stored procedure directly. this.ExecuteCommand("exec sp_insert_customer …"); } // The UPDATE override works similarly, that is, by // calling the stored procedure directly. private void UpdateCustomer(Customer original, Customer current) { // Call the UPDATE stored procedure by using current // and original values. this.ExecuteCommand("exec sp_update_customer …"); } // The DELETE override works similarly. 
private void DeleteCustomer(Customer customer) { // Call the DELETE stored procedure directly. this.ExecuteCommand("exec sp_delete_customer …"); } } ExampleDescriptionYou can use NorthwindThroughSprocs exactly as you would use Northwnd. CodeC#NorthwindThroughSprocs db = new NorthwindThroughSprocs(""); var custQuery = from cust in db.Customers where cust.City == "London" select cust; foreach (Customer custObj in custQuery) // deferred loading of cust.Orders uses the override LoadOrders. foreach (Order ord in custObj.Orders) // ... // Make some changes to customers/orders. // Overrides for Customer are called during the execution of the // following: db.SubmitChanges(); See alsoCustomizing Operations by Using Stored Procedures ExclusivelyAccess to data by using only stored procedures is a common scenario. ExampleDescriptionYou can modify the example provided in Customizing Operations By Using Stored Procedures by replacing even the first query (which causes dynamic SQL execution) with a method call that wraps a stored procedure. Assume CustomersByCity is the method, as in the following example. CodeC#[Function()] public IEnumerable<Customer> CustomersByCity( [Parameter(Name = "City", DbType = "NVarChar(15)")] string city) { IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), city); return ((IEnumerable<Customer>)(result.ReturnValue)); } The following code executes without any dynamic SQL. C#NorthwindThroughSprocs db = new NorthwindThroughSprocs("..."); // Use a method call (stored procedure wrapper) instead of // a LINQ query against the database. var custQuery = db.CustomersByCity("London"); foreach (Customer custObj in custQuery) { // Deferred loading of custObj.Orders uses the override // LoadOrders. There is no dynamic SQL. foreach (Order ord in custObj.Orders) { // Make some changes to customers/orders. // Overrides for Customer are called during the execution // of the following. 
} } db.SubmitChanges(); See alsoTransaction SupportLINQ to SQL supports three distinct transaction models. The following list describes these models in the order of the checks performed. Explicit Local TransactionWhen SubmitChanges is called, if the Transaction property is set to an IDbTransaction transaction, the SubmitChanges call is executed in the context of that same transaction. It is your responsibility to commit or roll back the transaction after successful execution. The connection corresponding to the transaction must match the connection used for constructing the DataContext. An exception is thrown if a different connection is used. Explicit Distributable TransactionYou can call LINQ to SQL APIs (including but not limited to SubmitChanges) in the scope of an active Transaction. LINQ to SQL detects that the call is in the scope of a transaction and does not create a new transaction. LINQ to SQL also avoids closing the connection in this case. You can perform query and SubmitChanges executions in the context of such a transaction. Implicit TransactionWhen you call SubmitChanges, LINQ to SQL checks to see whether the call is in the scope of a Transaction or if the Transaction property (IDbTransaction) is set to a user-started local transaction. If it finds neither transaction, LINQ to SQL starts a local transaction (IDbTransaction) and uses it to execute the generated SQL commands. When all SQL commands have been successfully completed, LINQ to SQL commits the local transaction and returns. See alsoSQL-CLR Type MismatchesLINQ to SQL automates much of the translation between the object model and SQL Server. Nevertheless, some situations prevent exact translation. These key mismatches between the common language runtime (CLR) types and the SQL Server database types are summarized in the following sections. You can find more details about specific type mappings and function translation at SQL-CLR Type Mapping and Data Types and Functions.
Data TypesTranslation between the CLR and SQL Server occurs when a query is being sent to the database, and when the results are sent back to your object model. For example, the following Transact-SQL query requires two value conversions: SQLSelect DateOfBirth From Customer Where CustomerId = @id Before the query can be executed on SQL Server, the value for the Transact-SQL parameter must be specified. In this example, the id parameter value must first be translated from a CLR System.Int32 type to a SQL Server INT type so that the database can understand what the value is. Then to retrieve the results, the SQL Server DateOfBirth column must be translated from a SQL Server DATETIME type to a CLR System.DateTime type for use in the object model. In this example, the types in the CLR object model and SQL Server database have natural mappings. But, this is not always the case. Missing CounterpartsThe following types do not have reasonable counterparts.
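Even where a mapping exists, range and precision can differ. The System.Data.SqlTypes structures make some of these differences visible on the CLR side; for example, the range of the SQL Server DATETIME type is far narrower than that of System.DateTime:

```csharp
using System;
using System.Data.SqlTypes;

class RangeMismatch
{
    static void Main()
    {
        // The SQL Server DATETIME type begins at January 1, 1753 ...
        Console.WriteLine(SqlDateTime.MinValue.Value.Year);  // 1753
        // ... whereas the CLR DateTime type begins at January 1, 0001.
        Console.WriteLine(DateTime.MinValue.Year);           // 1
        // Assigning an out-of-range DateTime to a SqlDateTime throws
        // SqlTypeException at run time rather than silently truncating.
    }
}
```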
Multiple MappingsThere are many SQL Server data types that you can map to one or more CLR data types. There are also many CLR types that you can map to one or more SQL Server types. Although a mapping may be supported by LINQ to SQL, that does not mean the two types mapped between the CLR and SQL Server are a perfect match in precision, range, and semantics. Some mappings may include differences in any or all of these dimensions. You can find details about these potential differences for the various mapping possibilities at SQL-CLR Type Mapping. User-defined TypesUser-defined CLR types are designed to help bridge the type system gap. Nevertheless, they surface interesting issues about type versioning. A change in the version on the client might not be matched by a change in the type stored on the database server. Any such change causes another type mismatch, where the type semantics might not match and the version gap is likely to become visible. Further complications occur as inheritance hierarchies are refactored in successive versions. Expression SemanticsIn addition to the pairwise mismatch between CLR and database types, expressions add complexity to the mismatch. Mismatches in operator semantics, function semantics, implicit type conversion, and precedence rules must be considered. The following subsections illustrate the mismatch between apparently similar expressions. It might be possible to generate SQL expressions that are semantically equivalent to a given CLR expression. However, it is not clear whether the semantic differences between apparently similar expressions are evident to a CLR user, and therefore whether the changes that are required for semantic equivalence are intended or not. This is an especially critical issue when an expression is evaluated for a set of values. The visibility of the difference might be data-dependent and hard to identify during coding and debugging.
Null SemanticsSQL expressions provide three-valued logic for Boolean expressions. The result can be true, false, or null. By contrast, the CLR specifies a two-valued Boolean result for comparisons involving null values. Consider the following code: C#Nullable<int> i = null; Nullable<int> j = null; if (i == j) { // This branch is executed. }SQL -- Assume col1 and col2 are integer columns with null values. -- Assume that ANSI null behavior has not been explicitly -- turned off. Select … From … Where col1 = col2 -- Evaluates to null, not true, and the corresponding row is not -- selected. -- To obtain matching behavior (i -> col1, j -> col2) change -- the query to the following: Select … From … Where col1 = col2 or (col1 is null and col2 is null) -- (Visual Basic 'Nothing'.) A similar problem occurs with the assumption about two-valued results. C#if ((i == j) || (i != j)) // Redundant condition. { // ... }SQL -- Assume col1 and col2 are nullable columns. -- Assume that ANSI null behavior has not been explicitly -- turned off. Select … From … Where col1 = col2 or col1 != col2 -- Visual Basic: col1 <> col2. -- Excludes the case where the boolean expression evaluates -- to null. Therefore the where clause does not always -- evaluate to true. In the previous case, you can get equivalent behavior in generating SQL, but the translation might not accurately reflect your intention. LINQ to SQL does not impose C# null or Visual Basic Nothing comparison semantics on SQL. Comparison operators are syntactically translated to their SQL equivalents. The semantics reflect SQL semantics as defined by server or connection settings. Two null values are considered unequal under default SQL Server settings (although you can change the settings to change the semantics). Regardless, LINQ to SQL does not consider server settings in query translation. A comparison with the literal null (Nothing) is translated to the appropriate SQL version (is null or is not null).
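The System.Data.SqlTypes structures model SQL's three-valued logic in the CLR, which makes the difference easy to observe side by side (a minimal sketch):

```csharp
using System;
using System.Data.SqlTypes;

class NullSemantics
{
    static void Main()
    {
        // CLR two-valued logic: comparing two null Nullable<int>
        // values is simply true.
        int? i = null;
        int? j = null;
        Console.WriteLine(i == j);            // True

        // SQL three-valued logic, mirrored by SqlInt32: comparing
        // two nulls yields SqlBoolean.Null (unknown), not true.
        SqlInt32 a = SqlInt32.Null;
        SqlInt32 b = SqlInt32.Null;
        Console.WriteLine((a == b).IsNull);   // True
    }
}
```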
The value of null (nothing) in collation is defined by SQL Server; LINQ to SQL does not change the collation. Type Conversion and PromotionSQL supports a rich set of implicit conversions in expressions. Similar expressions in C# would require an explicit cast. For example:
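One illustration: Transact-SQL implicitly converts a character string in an arithmetic context, whereas the same-looking C# expression binds to string concatenation:

```csharp
using System;

class ImplicitConversion
{
    static void Main()
    {
        // In Transact-SQL, SELECT '1' + 1 implicitly converts the
        // string to int (int has higher type precedence than varchar)
        // and evaluates to the integer 2.
        // In C#, the + operator concatenates instead:
        Console.WriteLine("1" + 1);   // Prints 11
    }
}
```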
Likewise, type precedence in Transact-SQL differs from type precedence in C# because the underlying set of types is different. In fact, there is no clear subset/superset relationship between the precedence lists. For example, comparing an nvarchar with a varchar causes the implicit conversion of the varchar expression to nvarchar. The CLR provides no equivalent promotion. In simple cases, these differences cause CLR expressions with casts to be redundant for a corresponding SQL expression. More importantly, the intermediate results of a SQL expression might be implicitly promoted to a type that has no accurate counterpart in C#, and vice versa. Overall, the testing, debugging, and validation of such expressions adds a significant burden on the user. CollationTransact-SQL supports explicit collations as annotations to character string types. These collations determine the validity of certain comparisons. For example, comparing two columns with different explicit collations is an error. The use of the much-simplified CTS string type does not cause such errors. Consider the following example: SQLcreate table T2 ( Col1 nvarchar(10), Col2 nvarchar(10) collate Latin_general_ci_as )C# class C { string s1; // Map to T2.Col1. string s2; // Map to T2.Col2. void Compare() { if (s1 == s2) // This is correct. { // ... } } }SQL Select … From … Where Col1 = Col2 -- Error, collation conflict. In effect, the collation subclause creates a restricted type that is not substitutable. Similarly, the sort order can be significantly different across the type systems. This difference affects the sorting of results. Guid is sorted on all 16 bytes by lexicographic order (IComparable()), whereas T-SQL compares GUIDs in the following order: node(10-15), clock-seq(8-9), time-high(6-7), time-mid(4-5), time-low(0-3). SQL Server has used this ordering since SQL 7.0, when NT-generated GUIDs had such an octet order.
The approach ensured that GUIDs generated at the same node cluster came together in sequential order according to timestamp. The approach was also useful for building indexes (inserts become appends instead of random IOs). The order was scrambled later in Windows because of privacy concerns, but SQL must maintain compatibility. A workaround is to use SqlGuid instead of Guid. Operator and Function DifferencesOperators and functions that are essentially comparable have subtly different semantics. For example:
// C# overflow in absence of explicit checks. int i = Int32.MaxValue; int j = 5; if (i+j < 0) Console.WriteLine("Overflow!"); // This code prints the overflow message.
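To obtain the Transact-SQL behavior, where the same addition raises an arithmetic overflow error instead of wrapping around, the expression can be evaluated in a checked context:

```csharp
using System;

class CheckedOverflow
{
    static void Main()
    {
        int i = Int32.MaxValue;
        int j = 5;
        try
        {
            // checked raises OverflowException instead of the silent
            // wraparound shown in the preceding fragment.
            int k = checked(i + j);
            Console.WriteLine(k);
        }
        catch (OverflowException)
        {
            Console.WriteLine("Overflow!");   // This branch runs.
        }
    }
}
```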
// C# equivalent on collections of Strings in place of nvarchars. String[] strings = { "food", "FOOD" }; foreach (String s in strings) { if (s == "food") { Console.WriteLine(s); } } // Only "food" is returned.
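To approximate the case-insensitive behavior of a default SQL Server collation in the CLR, an explicit comparison option can be supplied:

```csharp
using System;

class CaseInsensitiveCompare
{
    static void Main()
    {
        string[] strings = { "food", "FOOD" };
        foreach (string s in strings)
        {
            // OrdinalIgnoreCase matches both values, as a
            // case-insensitive SQL collation would.
            if (string.Equals(s, "food", StringComparison.OrdinalIgnoreCase))
            {
                Console.WriteLine(s);   // Both values are printed.
            }
        }
    }
}
```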
create table T4 ( Col1 nchar(4) ) Insert into T4(Col1) values ('21'); Insert into T4(Col1) values ('1021'); Select * from T4 where Col1 like '%1' -- Only the second row with Col1 = '1021' is returned. -- Not the first row!C# // Assume Like(String, String) method. string s = ""; // map to T4.Col1 if (System.Data.Linq.SqlClient.SqlMethods.Like(s, "%1")) { Console.WriteLine(s); } // Expected to return true for both "21" and "1021" A similar problem occurs with string concatenation. SQL
In summary, a convoluted translation might be required for CLR expressions, and additional operators or functions may be necessary to expose SQL functionality. Type CastingIn C# and in SQL, users can override the default semantics of expressions by using explicit type casts (Cast and Convert). However, exposing this capability across the type system boundary poses a dilemma. A SQL cast that provides the desired semantics cannot be easily translated to a corresponding C# cast. On the other hand, a C# cast cannot be directly translated into an equivalent SQL cast because of type mismatches, missing counterparts, and different type precedence hierarchies. There is a trade-off between exposing the type system mismatch and losing significant power of expression. In other cases, type casting might not be needed in either domain for validation of an expression but might be required to make sure that a non-default mapping is correctly applied to the expression. SQL-- Example from "Non-default Mapping" section extended create table T5 ( Col1 nvarchar(10), Col2 nvarchar(10) ) Insert into T5(col1, col2) values ('3', '2');C# class C { int x; // Map to T5.Col1. int y; // Map to T5.Col2. void Casting() { // Intended predicate. if (x + y > 4) { // valid for the data above } } }SQL Select * From T5 Where Col1 + Col2 > 4 -- "Col1 + Col2" expr evaluates to '32' Performance IssuesAccounting for some SQL Server-CLR type differences may result in a decrease in performance when crossing between the CLR and SQL Server type systems. Examples of scenarios impacting performance include the following:
-- Table DDL
create table T5 (
    Col1 varchar(100)
)

C#
class C5
{
    string s; // Map to T5.Col1.
}

Consider the translation of the expression (s = SOME_STRING_CONSTANT).
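The string-versus-numeric mismatch illustrated by the Col1 + Col2 example above can also be reproduced in plain C#, with local variables standing in for the mapped text columns (using the example values '3' and '2'):

```csharp
using System;

class ConcatVsAdd
{
    static void Main()
    {
        // Local variables stand in for the mapped nvarchar columns.
        string col1 = "3", col2 = "2";

        // SQL semantics on text columns: '+' concatenates.
        Console.WriteLine(col1 + col2);  // 32

        // The intended arithmetic needs an explicit conversion first.
        Console.WriteLine(int.Parse(col1) + int.Parse(col2));  // 5
    }
}
```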
In addition to semantic differences, it is important to consider impacts to performance when crossing between the SQL Server and CLR type systems. For large data sets, such performance issues can determine whether an application is deployable.

See also

SQL-CLR Custom Type Mappings
Type mapping between SQL Server and the common language runtime (CLR) is specified automatically when you use the SQLMetal command-line tool or the Object Relational Designer (O/R Designer). When no customized mapping is performed, these tools assign default type mappings as described in SQL-CLR Type Mapping. If you want to map types differently from these defaults, you need to customize the type mappings. The recommended approach is to make the changes in an intermediary DBML file, and then use that customized DBML file when you create your code and mapping files with SQLMetal or the O/R Designer. Once you instantiate the DataContext object from the code and mapping files, the DataContext.CreateDatabase method creates a database based on the type mappings that are specified. If no CLR type attributes are specified in the mappings, the default type mappings are used.

Customization with SQLMetal or O/R Designer
With SQLMetal and the O/R Designer, you can automatically create an object model that includes the type mapping information inside or outside the code file. Because these files are overwritten each time SQLMetal or the O/R Designer re-creates your mappings, the recommended approach to specifying custom type mappings is to customize a DBML file. To customize type mappings with SQLMetal or the O/R Designer, first generate a DBML file. Then, before generating the code file or mapping file, modify the DBML file to identify the desired type mappings. With SQLMetal, you have to manually change the Type and DbType attributes in the DBML file to make your type mapping customizations.
With the O/R Designer, you can make your changes within the Designer. For more information about using the O/R Designer, see LINQ to SQL Tools in Visual Studio.

Note Some type mappings may result in overflow or data loss exceptions while translating to or from the database. Carefully review the Type Mapping Run-time Behavior Matrix in SQL-CLR Type Mapping before making any customizations.

For your type mapping customizations to be recognized by SQLMetal or the O/R Designer, make sure that these tools are supplied with the path to your custom DBML file when you generate your code file or external mapping file. Although not required for type mapping customization, it is recommended that you always separate your type mapping information from your code file and generate an additional external type mapping file. Doing so preserves flexibility by not requiring that the code file be recompiled.

Incorporating Database Changes
When your database changes, you will need to update your DBML file to reflect those changes. One way to do this is to automatically create a new DBML file and then redo your type mapping customizations. Alternatively, you can compare the differences between the new DBML file and your customized DBML file and update the customized file manually to reflect the database changes.

See also

User-Defined Functions
LINQ to SQL uses methods in your object model to represent user-defined functions. You designate methods as functions by applying the FunctionAttribute attribute and, where required, the ParameterAttribute attribute. For more information, see The LINQ to SQL Object Model. To avoid an InvalidOperationException, user-defined functions in LINQ to SQL must be in one of the following forms:
The topics in this section show how to form and call these methods in your application if you write the code yourself. Developers using Visual Studio would typically use the Object Relational Designer to map user-defined functions. In This Section
How to: Use Scalar-Valued User-Defined Functions
How to: Use Table-Valued User-Defined Functions
How to: Call User-Defined Functions Inline

How to: Use Scalar-Valued User-Defined Functions
You can map a client method defined on a class to a user-defined function by using the FunctionAttribute attribute. Note that the body of the method constructs an expression that captures the intent of the method call and passes that expression to the DataContext for translation and execution.

Note Direct execution occurs only if the function is called outside a query. For more information, see How to: Call User-Defined Functions Inline.

Example
The following SQL code presents a scalar-valued user-defined function ReverseCustName().

CREATE FUNCTION ReverseCustName(@string varchar(100))
RETURNS varchar(100)
AS
BEGIN
    DECLARE @custName varchar(100)
    -- Implementation left as exercise for users.
    RETURN @custName
END

You would map a client method such as the following for this code:

C#
[Function(Name = "dbo.ReverseCustName", IsComposable = true)]
[return: Parameter(DbType = "VarChar(100)")]
public string ReverseCustName([Parameter(Name = "string", DbType = "VarChar(100)")] string @string)
{
    return ((string)(this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), @string).ReturnValue));
}

See also

How to: Use Table-Valued User-Defined Functions
A table-valued function returns a single rowset (unlike stored procedures, which can return multiple result shapes). Because the return type of a table-valued function is Table, you can use a table-valued function anywhere in SQL that you can use a table, and you can treat the function's result just as you would a table.

Example
The following SQL function explicitly states that it returns a TABLE. Therefore, the returned rowset structure is implicitly defined.
CREATE FUNCTION ProductsCostingMoreThan(@cost money)
RETURNS TABLE
AS
RETURN
    SELECT ProductID, UnitPrice
    FROM Products
    WHERE UnitPrice > @cost

LINQ to SQL maps the function as follows:

C#
[Function(Name="dbo.ProductsCostingMoreThan", IsComposable=true)]
public IQueryable<ProductsCostingMoreThanResult> ProductsCostingMoreThan([Parameter(DbType="Money")] System.Nullable<decimal> cost)
{
    return this.CreateMethodCallQuery<ProductsCostingMoreThanResult>(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), cost);
}

Example
The following SQL code shows that you can join to the table that the function returns and otherwise treat it as you would any other table:

SELECT p2.ProductName, p1.UnitPrice
FROM dbo.ProductsCostingMoreThan(80.50) AS p1
INNER JOIN Products AS p2 ON p1.ProductID = p2.ProductID

In LINQ to SQL, the query would be rendered as follows:

C#
var q =
    from p in db.ProductsCostingMoreThan(80.50m)
    join s in db.Products on p.ProductID equals s.ProductID
    select new { p.ProductID, s.UnitPrice };

See also

How to: Call User-Defined Functions Inline
Although you can call user-defined functions inline, functions that are included in a query whose execution is deferred are not executed until the query is executed. For more information, see Introduction to LINQ Queries (C#). When you call the same function outside a query, LINQ to SQL creates a simple query from the method call expression. The following is the SQL syntax (the parameter @p0 is bound to the constant passed in):

SELECT dbo.ReverseCustName(@p0)

LINQ to SQL creates the following:

C#
string str = db.ReverseCustName("LINQ to SQL");

Example
In the following LINQ to SQL query, you can see an inline call to the generated user-defined function method ReverseCustName. The function is not executed immediately because query execution is deferred. The SQL built for this query translates to a call to the user-defined function in the database (see the SQL code following the query).
C#
var custQuery =
    from cust in db.Customers
    select new { cust.ContactName, Title = db.ReverseCustName(cust.ContactTitle) };

SQL
SELECT [t0].[ContactName], dbo.ReverseCustName([t0].[ContactTitle]) AS [Title]
FROM [Customers] AS [t0]

See also

Reference
This section provides reference information for LINQ to SQL developers. You are also encouraged to search Microsoft Docs for specific issues, and especially to participate in the LINQ Forum, where you can discuss more complex topics in detail with experts. In addition, you can study a white paper detailing LINQ to SQL technology, complete with Visual Basic and C# code examples. For more information, see LINQ to SQL: .NET Language-Integrated Query for Relational Data. In This Section
Data Types and Functions
Attribute-Based Mapping
Code Generation in LINQ to SQL
External Mapping
Frequently Asked Questions
SQL Server Compact and LINQ to SQL
Standard Query Operator Translation Related Sections
LINQ to SQL
Language-Integrated Query (LINQ) - C#
LinqDataSource Web Server Control Overview

Data Types and Functions
The topics listed in the following table describe LINQ to SQL support for members, constructs, and casts of the common language runtime (CLR). Supported members and constructs are available to use in your LINQ to SQL queries. An unsupported item in the table means that LINQ to SQL cannot translate the CLR member, construct, or cast for execution on SQL Server. You may still be able to use them in your code, but they must be evaluated before the query is translated to Transact-SQL or after the results have been retrieved from the database.
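A common way to use an untranslatable member is to force evaluation on the client after the translatable part of the query has run. The sketch below illustrates the shape of that pattern with an in-memory list standing in for a mapped table and a hypothetical helper method that LINQ to SQL could not translate; with a real DataContext, AsEnumerable() plays the same role of moving the remainder of the query into the CLR:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class ClientEvaluation
{
    // Hypothetical helper that LINQ to SQL could not translate to Transact-SQL.
    static bool HasDoubledLetter(string s) =>
        s.Zip(s.Skip(1), (a, b) => a == b).Any(equal => equal);

    static void Main()
    {
        // In-memory stand-in for a mapped table such as db.Customers.
        var names = new List<string> { "Anna", "Bob", "Lee" };

        // AsEnumerable() moves everything after it out of the translated
        // query and into ordinary CLR evaluation.
        var query = names.AsEnumerable().Where(HasDoubledLetter);

        foreach (var name in query)
            Console.WriteLine(name);  // Anna, Lee
    }
}
```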
See alsoSQL-CLR Type MappingIn LINQ to SQL, the data model of a relational database maps to an object model that is expressed in the programming language of your choice. When the application runs, LINQ to SQL translates the language-integrated queries in the object model into SQL and sends them to the database for execution. When the database returns the results, LINQ to SQL translates the results back to objects that you can work with in your own programming language. In order to translate data between the object model and the database, a type mapping must be defined. LINQ to SQL uses a type mapping to match each common language runtime (CLR) type with a particular SQL Server type. You can define type mappings and other mapping information, such as database structure and table relationships, inside the object model with attribute-based mapping. Alternatively, you can specify the mapping information outside the object model with an external mapping file. For more information, see Attribute-Based Mapping and External Mapping. This topic discusses the following points: Default Type MappingYou can create the object model or external mapping file automatically with the Object Relational Designer (O/R Designer) or the SQLMetal command-line tool. The default type mappings for these tools define which CLR types are chosen to map to columns inside the SQL Server database. For more information about using these tools, see Creating the Object Model. You can also use the CreateDatabase method to create a SQL Server database based on the mapping information in the object model or external mapping file. The default type mappings for the CreateDatabase method define which type of SQL Server columns are created to map to the CLR types in the object model. For more information, see How to: Dynamically Create a Database. 
Type Mapping Run-time Behavior MatrixThe following diagram shows the expected run-time behavior of specific type mappings when data is retrieved from or saved to the database. With the exception of serialization, LINQ to SQL does not support mapping between any CLR or SQL Server data types that are not specified in this matrix. For more information on serialization support, see Binary Serialization.
Note Some type mappings may result in overflow or data loss exceptions while translating to or from the database. Custom Type MappingWith LINQ to SQL, you are not limited to the default type mappings used by the O/R Designer, SQLMetal, and the CreateDatabase method. You can create custom type mappings by explicitly specifying them in a DBML file. Then you can use that DBML file to create the object model code and mapping file. For more information, see SQL-CLR Custom Type Mappings. Behavior Differences Between CLR and SQL ExecutionBecause of differences in precision and execution between the CLR and SQL Server, you may receive different results or experience different behavior depending on where you perform your calculations. Calculations performed in LINQ to SQL queries are actually translated to Transact-SQL and then executed on the SQL Server database. Calculations performed outside LINQ to SQL queries are executed within the context of the CLR. For example, the following are some differences in behavior between the CLR and SQL Server:
Enum MappingLINQ to SQL supports mapping the CLR System.Enum type to SQL Server types in two ways:
Note When mapping SQL text types to a CLR System.Enum, include only the names of the Enum members in the mapped SQL column. Other values are not supported in the Enum-mapped SQL column. The O/R Designer and the SQLMetal command-line tool cannot automatically map a SQL type to a CLR Enum class. You must explicitly configure this mapping by customizing a DBML file for use by the O/R Designer and SQLMetal. For more information about custom type mapping, see SQL-CLR Custom Type Mappings. Because a SQL column intended for enumeration is of the same type as other numeric and text columns, these tools will not recognize your intent and will default to the mappings described in the following Numeric Mapping and Text and XML Mapping sections. For more information about generating code with the DBML file, see Code Generation in LINQ to SQL. The DataContext.CreateDatabase method creates a SQL column of numeric type to map a CLR System.Enum type.

Numeric Mapping
LINQ to SQL lets you map many CLR and SQL Server numeric types. The following table shows the CLR types that O/R Designer and SQLMetal select when building an object model or external mapping file based on your database.
The next table shows the default type mappings used by the DataContext.CreateDatabase method to define which type of SQL columns are created to map to the CLR types defined in your object model or external mapping file.
There are many other numeric mappings you can choose, but some may result in overflow or data loss exceptions while translating to or from the database. For more information, see the Type Mapping Run Time Behavior Matrix. Decimal and Money TypesThe default precision of SQL Server DECIMAL type (18 decimal digits to the left and right of the decimal point) is much smaller than the precision of the CLR System.Decimal type that it is paired with by default. This can result in precision loss when you save data to the database. However, just the opposite can happen if the SQL Server DECIMAL type is configured with greater than 29 digits of precision. When a SQL Server DECIMAL type has been configured with a greater precision than the CLR System.Decimal, precision loss can occur when retrieving data from the database. The SQL Server MONEY and SMALLMONEY types, which are also paired with the CLR System.Decimal type by default, have a much smaller precision, which can result in overflow or data loss exceptions when saving data to the database. Text and XML MappingThere are also many text-based and XML types that you can map with LINQ to SQL. The following table shows the CLR types that O/R Designer and SQLMetal select when building an object model or external mapping file based on your database.
The next table shows the default type mappings used by the DataContext.CreateDatabase method to define which type of SQL columns are created to map to the CLR types defined in your object model or external mapping file.
There are many other text-based and XML mappings you can choose, but some may result in overflow or data loss exceptions while translating to or from the database. For more information, see the Type Mapping Run Time Behavior Matrix. XML TypesThe SQL Server XML data type is available starting in Microsoft SQL Server 2005. You can map the SQL Server XML data type to XElement, XDocument, or String. If the column stores XML fragments that cannot be read into XElement, the column must be mapped to String to avoid run-time errors. XML fragments that must be mapped to String include the following:
Although you can map XElement and XDocument to SQL Server as shown in the Type Mapping Run Time Behavior Matrix, the DataContext.CreateDatabase method has no default SQL Server type mapping for these types. Custom TypesIf a class implements Parse() and ToString(), you can map the object to any SQL text type (CHAR, NCHAR, VARCHAR, NVARCHAR, TEXT, NTEXT, XML). The object is stored in the database by sending the value returned by ToString() to the mapped database column. The object is reconstructed by invoking Parse() on the string returned by the database. Note LINQ to SQL does not support serialization by using System.Xml.Serialization.IXmlSerializable. Date and Time MappingWith LINQ to SQL, you can map many SQL Server date and time types. The following table shows the CLR types that O/R Designer and SQLMetal select when building an object model or external mapping file based on your database.
The next table shows the default type mappings used by the DataContext.CreateDatabase method to define which type of SQL columns are created to map to the CLR types defined in your object model or external mapping file.
There are many other date and time mappings you can choose, but some may result in overflow or data loss exceptions while translating to or from the database. For more information, see the Type Mapping Run Time Behavior Matrix. Note The SQL Server types DATETIME2, DATETIMEOFFSET, DATE, and TIME are available starting with Microsoft SQL Server 2008. LINQ to SQL supports mapping to these new types starting with the .NET Framework version 3.5 SP1. System.DatetimeThe range and precision of the CLR System.DateTime type is greater than the range and precision of the SQL Server DATETIME type, which is the default type mapping for the DataContext.CreateDatabase method. To help avoid exceptions related to dates outside the range of DATETIME, use DATETIME2, which is available starting with Microsoft SQL Server 2008. DATETIME2 can match the range and precision of the CLR System.DateTime. SQL Server dates have no concept of TimeZone, a feature that is richly supported in the CLR. TimeZone values are saved as is to the database without TimeZone conversion, regardless of the original DateTimeKind information. When DateTime values are retrieved from the database, their value is loaded as is into a DateTime with a DateTimeKind of Unspecified. For more information about supported System.DateTime methods, see System.DateTime Methods. System.TimeSpanMicrosoft SQL Server 2008 and the .NET Framework 3.5 SP1 let you map the CLR System.TimeSpan type to the SQL Server TIME type. However, there is a large difference between the range that the CLR System.TimeSpan supports and what the SQL Server TIME type supports. Mapping values less than 0 or greater than 23:59:59.9999999 hours to the SQL TIME will result in overflow exceptions. For more information, see System.TimeSpan Methods. In Microsoft SQL Server 2000 and SQL Server 2005, you cannot map database fields to TimeSpan. 
However, operations on TimeSpan are supported because TimeSpan values can be returned from DateTime subtraction or introduced into an expression as a literal or bound variable. Binary MappingThere are many SQL Server types that can map to the CLR type System.Data.Linq.Binary. The following table shows the SQL Server types that cause O/R Designer and SQLMetal to define a CLR System.Data.Linq.Binary type when building an object model or external mapping file based on your database.
The next table shows the default type mappings used by the DataContext.CreateDatabase method to define which type of SQL columns are created to map to the CLR types defined in your object model or external mapping file.
There are many other binary mappings you can choose, but some may result in overflow or data loss exceptions while translating to or from the database. For more information, see the Type Mapping Run Time Behavior Matrix. SQL Server FILESTREAMThe FILESTREAM attribute for VARBINARY(MAX) columns is available starting with Microsoft SQL Server 2008; you can map to it with LINQ to SQL starting with the .NET Framework version 3.5 SP1. Although you can map VARBINARY(MAX) columns with the FILESTREAM attribute to Binary objects, the DataContext.CreateDatabase method is unable to automatically create columns with the FILESTREAM attribute. For more information about FILESTREAM, see FILESTREAM Overview on Microsoft SQL Server Books Online. Binary SerializationIf a class implements the ISerializable interface, you can serialize an object to any SQL binary field (BINARY, VARBINARY, IMAGE). The object is serialized and deserialized according to how the ISerializable interface is implemented. For more information, see Binary Serialization. Miscellaneous MappingThe following table shows the default type mappings for some miscellaneous types that have not yet been mentioned. The following table shows the CLR types that O/R Designer and SQLMetal select when building an object model or external mapping file based on your database.
The next table shows the default type mappings used by the DataContext.CreateDatabase method to define which type of SQL columns are created to map to the CLR types defined in your object model or external mapping file.
LINQ to SQL does not support any other type mappings for these miscellaneous types. For more information, see the Type Mapping Run Time Behavior Matrix.

See also

Basic Data Types
Because LINQ to SQL queries are translated to Transact-SQL before they are executed on Microsoft SQL Server, LINQ to SQL supports much of the same built-in functionality that SQL Server does for basic data types.

Casting
Implicit or explicit casts are enabled from a source CLR type to a target CLR type if there is a similar valid conversion within SQL Server. For more information about CLR casting, see CType Function (Visual Basic) and Type-testing and conversion operators. After conversion, casts change the behavior of operations performed on a CLR expression to match the behavior of other CLR expressions that naturally map to the destination type. Casts are also translatable in the context of inheritance mapping. Objects can be cast to more specific entity subtypes so that their subtype-specific data can be accessed.

Equality Operators
LINQ to SQL supports the following equality operators on basic data types inside LINQ to SQL queries:
See also

Boolean Data Types
Boolean operators work as expected in the common language runtime (CLR), except that short-circuiting behavior is not translated. For example, the Visual Basic AndAlso operator behaves like the And operator, and the C# && operator behaves like the & operator. LINQ to SQL supports the following operators.
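The untranslated short-circuiting can be observed on the client. In this self-contained sketch, the C# && operator never evaluates its right operand when the left operand is false, while & always does; a translated Transact-SQL predicate may evaluate every operand, as in the second case:

```csharp
using System;

class ShortCircuit
{
    static int probeCalls;

    static bool Probe()
    {
        probeCalls++;
        return true;
    }

    static void Main()
    {
        bool left = false;

        probeCalls = 0;
        bool shortCircuited = left && Probe();  // Probe is never called
        Console.WriteLine(probeCalls);          // 0

        probeCalls = 0;
        bool fullyEvaluated = left & Probe();   // Probe runs regardless
        Console.WriteLine(probeCalls);          // 1

        // In the translated Transact-SQL, && behaves like the second form:
        // every operand may be evaluated.
    }
}
```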
See also

Null Semantics
The following table provides links to various parts of the LINQ to SQL documentation where null (Nothing in Visual Basic) issues are discussed.
See also

Numeric and Comparison Operators
Arithmetic and comparison operators work as expected in the common language runtime (CLR) except as follows:
Supported Operators
LINQ to SQL supports the following operators.
See also

Sequence Operators
Generally speaking, LINQ to SQL does not support sequence operators that have one or more of the following qualities:
Differences from .NET
All supported sequence operators work as expected in the common language runtime (CLR) except for Average. In LINQ to SQL, Average returns a value of the same type as the values being averaged, whereas in the CLR, Average always returns either a Double or a Decimal. If the source argument is explicitly cast to double or decimal, or the selector casts to double or decimal, the resulting SQL will also have such a conversion and the result will be as expected.

See also

System.Convert Methods
LINQ to SQL does not support the following Convert methods.
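The Average difference described above is easy to see on the client. In this sketch (LINQ to Objects, no database), the CLR's Average over Int32 values returns a Double, and an explicit cast in the selector is the way to get the same result through the SQL translation:

```csharp
using System;
using System.Linq;

class AverageDemo
{
    static void Main()
    {
        int[] values = { 1, 2 };

        // CLR behavior: Average over Int32 returns Double, so nothing is truncated.
        double clrAverage = values.Average();
        Console.WriteLine(clrAverage == 1.5);  // True

        // SQL's AVG keeps the source type (an int column averages to int);
        // casting in the selector makes the translated SQL convert first.
        double viaCast = values.Select(v => (double)v).Average();
        Console.WriteLine(viaCast == 1.5);     // True
    }
}
```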
See also

System.DateTime Methods
The following LINQ to SQL-supported methods, operators, and properties are available to use in LINQ to SQL queries. When a method, operator, or property is unsupported, LINQ to SQL cannot translate the member for execution on SQL Server. You may use these members in your code; however, they must be evaluated before the query is translated to Transact-SQL or after the results have been retrieved from the database.

Supported System.DateTime Members
Once mapped in the object model or external mapping file, LINQ to SQL allows you to call the following System.DateTime members inside LINQ to SQL queries.
Members Not Supported by LINQ to SQL
The following members are not supported inside LINQ to SQL queries.

Method Translation Example
All methods supported by LINQ to SQL are translated to Transact-SQL before they are sent to SQL Server. For example, consider the following pattern:

(dateTime1 - dateTime2).{Days, Hours, Milliseconds, Minutes, Months, Seconds, Years}

When it is recognized, it is translated into a direct call to the SQL Server DATEDIFF function, as follows:

DATEDIFF({DatePart}, @dateTime1, @dateTime2)

SqlMethods Date and Time Methods
In addition to the methods offered by the DateTime structure, LINQ to SQL offers the methods listed in the following table from the System.Data.Linq.SqlClient.SqlMethods class for working with date and time.
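On the CLR side, the recognized pattern is ordinary DateTime subtraction yielding a TimeSpan. A self-contained sketch with arbitrary example dates:

```csharp
using System;

class DateDiffPattern
{
    static void Main()
    {
        // Arbitrary example dates.
        var dateTime1 = new DateTime(2008, 3, 15);
        var dateTime2 = new DateTime(2008, 3, 1);

        // This is the (dateTime1 - dateTime2).Days shape that LINQ to SQL
        // recognizes and rewrites as a DATEDIFF call in Transact-SQL.
        TimeSpan difference = dateTime1 - dateTime2;
        Console.WriteLine(difference.Days);  // 14
    }
}
```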
See also

System.Math Methods
LINQ to SQL does not support the following Math methods.

Differences from .NET
The .NET Framework has different rounding semantics from SQL Server. The Round method in the .NET Framework performs banker's rounding, in which numbers that end in .5 are rounded to the nearest even digit instead of to the next higher digit. For example, 2.5 rounds to 2, while 3.5 rounds to 4. (This technique helps avoid systematic bias toward higher values in large data transactions.) In SQL, the ROUND function instead always rounds away from 0. Therefore, 2.5 rounds to 3, in contrast to its rounding to 2 in the .NET Framework. LINQ to SQL passes through to the SQL ROUND semantics and does not try to implement banker's rounding.

See also

System.Object Methods
LINQ to SQL supports the following Object methods.
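The two rounding modes described above can be compared directly on the client. A self-contained sketch; the MidpointRounding.AwayFromZero overload is how client code can mimic SQL's ROUND behavior when that is what is wanted:

```csharp
using System;

class RoundingDemo
{
    static void Main()
    {
        // .NET default: banker's rounding (midpoints go to the nearest even digit).
        Console.WriteLine(Math.Round(2.5));  // 2
        Console.WriteLine(Math.Round(3.5));  // 4

        // SQL's ROUND always rounds midpoints away from zero; client code
        // can request that behavior explicitly.
        Console.WriteLine(Math.Round(2.5, MidpointRounding.AwayFromZero));  // 3
    }
}
```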
LINQ to SQL does not support the following Object methods.
Differences from .NET
The output of Object.ToString() for a double is produced in SQL with CONVERT(NVARCHAR(30), @x, 2). SQL always uses 16 digits and scientific notation in this case (for example, "0.000000000000000e+000" for 0). As a result, an Object.ToString() conversion does not produce the same string as Convert.ToString in the .NET Framework.

See also

System.String Methods
LINQ to SQL does not support the following String methods.

Unsupported System.String Methods in General
Unsupported String methods in general:
Unsupported System.String Static Methods

Unsupported System.String Non-static Methods
Differences from .NET
See also

System.TimeSpan Methods
Member support for System.TimeSpan depends greatly on the versions of the .NET Framework and Microsoft SQL Server that you are using. When a method, operator, or property is unsupported, LINQ to SQL cannot translate the member for execution on SQL Server. You may still be able to use these members in your code; however, they must be evaluated before the query is translated to Transact-SQL or after the results have been retrieved from the database.

Previous Limitations
When using LINQ to SQL with versions of the .NET Framework prior to .NET Framework 3.5 SP1, you cannot map SQL Server database fields to System.TimeSpan. However, operations on TimeSpan are supported because TimeSpan values can be returned from DateTime subtraction or introduced into an expression as a literal or bound variable.

Supported System.TimeSpan Members
The following LINQ to SQL-supported methods, operators, and properties are available for you to use in your LINQ to SQL queries. Once mapped in the object model or external mapping file, LINQ to SQL allows you to call many of the System.TimeSpan members inside your LINQ to SQL queries.
Note The ability to map System.TimeSpan to a SQL TIME column with LINQ to SQL requires the .NET Framework 3.5 SP1 or later. The SQL TIME data type is only available in Microsoft SQL Server 2008 and later.

Addition and Subtraction
Although the CLR System.TimeSpan type does support addition and subtraction, the SQL TIME type does not. Because of this, your LINQ to SQL queries will generate errors if they attempt addition and subtraction when they are mapped to the SQL TIME type. You can find other considerations for working with SQL date and time types in SQL-CLR Type Mapping.

See also

System.DateTimeOffset Methods
Once mapped in the object model or external mapping file, LINQ to SQL allows you to call most of the System.DateTimeOffset methods, operators, and properties from within your LINQ to SQL queries. The only methods not supported are those inherited from System.Object that do not make sense in the context of LINQ to SQL queries, such as Finalize, GetHashCode, GetType, and MemberwiseClone. These methods are not supported because LINQ to SQL cannot translate them for execution on SQL Server.

Note The common language runtime (CLR) System.DateTimeOffset structure, and the ability to map it to a SQL DATETIMEOFFSET column with LINQ to SQL, requires the .NET Framework 3.5 SP1 or later. The SQL DATETIMEOFFSET column is only available in Microsoft SQL Server 2008 and later.

SqlMethods Date and Time Methods
In addition to the methods offered by the DateTimeOffset structure, LINQ to SQL offers the methods listed in the following table from the System.Data.Linq.SqlClient.SqlMethods class for working with date and time.
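Given the much narrower range of the SQL TIME type discussed above, a client-side guard before saving can avoid overflow exceptions. A hypothetical helper sketch; the bounds follow the documented TIME range of 00:00:00.0000000 through 23:59:59.9999999:

```csharp
using System;

class SqlTimeRange
{
    // Hypothetical guard: SQL TIME accepts only non-negative values
    // strictly less than one day (00:00:00.0000000 - 23:59:59.9999999).
    static bool FitsSqlTime(TimeSpan value) =>
        value >= TimeSpan.Zero && value < TimeSpan.FromDays(1);

    static void Main()
    {
        Console.WriteLine(FitsSqlTime(new TimeSpan(23, 59, 59)));  // True
        Console.WriteLine(FitsSqlTime(TimeSpan.FromHours(25)));    // False -- would overflow
        Console.WriteLine(FitsSqlTime(TimeSpan.FromHours(-1)));    // False -- negative
    }
}
```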
See alsoAttribute-Based MappingLINQ to SQL maps a SQL Server database to a LINQ to SQL object model by either applying attributes or by using an external mapping file. This topic outlines the attribute-based approach. In its most elementary form, LINQ to SQL maps a database to a DataContext, a table to a class, and columns and relationships to properties on those classes. You can also use attributes to map an inheritance hierarchy in your object model. For more information, see How to: Generate the Object Model in Visual Basic or C#. Developers using Visual Studio typically perform attribute-based mapping by using the Object Relational Designer. You can also use the SQLMetal command-line tool, or you can hand-code the attributes yourself. For more information, see How to: Generate the Object Model in Visual Basic or C#. Note You can also map by using an external XML file. For more information, see External Mapping. The following sections describe attribute-based mapping in more detail. For more information, see the System.Data.Linq.Mapping namespace. DatabaseAttribute AttributeUse this attribute to specify the default name of the database when a name is not supplied by the connection. This attribute is optional, but if you use it, you must apply the Name property, as described in the following table.
For more information, see DatabaseAttribute. TableAttribute AttributeUse this attribute to designate a class as an entity class that is associated with a database table or view. LINQ to SQL treats classes that have this attribute as persistent classes. The following table describes the Name property.
For more information, see TableAttribute. ColumnAttribute AttributeUse this attribute to designate a member of an entity class to represent a column in a database table. You can apply this attribute to any field or property. Only those members you identify as columns are retrieved and persisted when LINQ to SQL saves changes to the database. Members without this attribute are assumed to be non-persistent and are not submitted for inserts or updates. The following table describes properties of this attribute.
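The DatabaseAttribute, TableAttribute, and ColumnAttribute attributes described above can be combined in a minimal hand-coded object model. The following sketch uses Northwind-style names for illustration:

```csharp
using System.Data.Linq;
using System.Data.Linq.Mapping;

// Name supplies the default database when the connection
// string does not specify one.
[Database(Name = "Northwind")]
public class NorthwindContext : DataContext
{
    public Table<Customer> Customers;
    public NorthwindContext(string connection) : base(connection) { }
}

// An entity class: only members marked with ColumnAttribute are
// retrieved and persisted.
[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true, CanBeNull = false)]
    public string CustomerID { get; set; }

    [Column(DbType = "NVarChar(40) NOT NULL")]
    public string CompanyName { get; set; }

    public string NotPersisted { get; set; }  // ignored by LINQ to SQL
}
```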
For more information, see ColumnAttribute. Note AssociationAttribute and ColumnAttribute Storage property values are case sensitive. For example, ensure that values used in the attribute for the AssociationAttribute.Storage property match the case for the corresponding property names used elsewhere in the code. This applies to all .NET programming languages, even those which are not typically case sensitive, including Visual Basic. For more information about the Storage property, see DataAttribute.Storage. AssociationAttribute AttributeUse this attribute to designate a property to represent an association in the database, such as a foreign key to primary key relationship. For more information about relationships, see How to: Map Database Relationships. The following table describes properties of this attribute.
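A typical association, sketched here with Northwind-style names, pairs an AssociationAttribute property with a private EntityRef backing field; as noted above, the Storage value must match the field name exactly, including case:

```csharp
using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table(Name = "Orders")]
public class Order
{
    [Column(IsPrimaryKey = true)]
    public int OrderID { get; set; }

    [Column]
    public string CustomerID { get; set; }

    // The Storage value below must match this field name exactly,
    // including case, in every .NET language.
    private EntityRef<Customer> _Customer;

    [Association(Storage = "_Customer", ThisKey = "CustomerID",
                 OtherKey = "CustomerID", IsForeignKey = true)]
    public Customer Customer
    {
        get { return _Customer.Entity; }
        set { _Customer.Entity = value; }
    }
}

[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true)]
    public string CustomerID { get; set; }
}
```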
For more information, see AssociationAttribute. Note AssociationAttribute and ColumnAttribute Storage property values are case sensitive. For example, ensure that values used in the attribute for the AssociationAttribute.Storage property match the case for the corresponding property names used elsewhere in the code. This applies to all .NET programming languages, even those which are not typically case sensitive, including Visual Basic. For more information about the Storage property, see DataAttribute.Storage. InheritanceMappingAttribute AttributeUse this attribute to map an inheritance hierarchy. The following table describes properties of this attribute.
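A minimal inheritance mapping, with hypothetical table, column, and discriminator values, might look like the following. A single table holds both types, and the discriminator column selects the CLR type for each row:

```csharp
using System.Data.Linq.Mapping;

// Single-table inheritance: the discriminator column selects the
// CLR type. Names and code values here are hypothetical.
[Table(Name = "Employees")]
[InheritanceMapping(Code = "G", Type = typeof(Employee), IsDefault = true)]
[InheritanceMapping(Code = "C", Type = typeof(Contractor))]
public class Employee
{
    [Column(IsPrimaryKey = true)]
    public int EmployeeID { get; set; }

    [Column(IsDiscriminator = true)]
    public string EmployeeType { get; set; }
}

public class Contractor : Employee
{
    [Column(CanBeNull = true)]
    public string Agency { get; set; }
}
```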
For more information, see InheritanceMappingAttribute. FunctionAttribute AttributeUse this attribute to designate a method as representing a stored procedure or user-defined function in the database. The following table describes the properties of this attribute.
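A hand-coded stored-procedure method, following the pattern that the code-generation tools emit (the procedure and result names here are illustrative, based on the Northwind sample):

```csharp
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Reflection;

public class NorthwindContext : DataContext
{
    public NorthwindContext(string connection) : base(connection) { }

    // Maps this method to the CustOrderHist stored procedure; the
    // ParameterAttribute maps the argument to @CustomerID.
    [Function(Name = "dbo.CustOrderHist")]
    public ISingleResult<CustOrderHistResult> CustOrderHist(
        [Parameter(Name = "CustomerID", DbType = "NChar(5)")]
        string customerID)
    {
        IExecuteResult result = this.ExecuteMethodCall(
            this, (MethodInfo)MethodInfo.GetCurrentMethod(), customerID);
        return (ISingleResult<CustOrderHistResult>)result.ReturnValue;
    }
}

public class CustOrderHistResult
{
    [Column] public string ProductName { get; set; }
    [Column] public int Total { get; set; }
}
```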
For more information, see FunctionAttribute. ParameterAttribute AttributeUse this attribute to map input parameters on stored procedure methods. The following table describes properties of this attribute.
For more information, see ParameterAttribute. ResultTypeAttribute AttributeUse this attribute to specify a result type. The following table describes properties of this attribute.
For more information, see ResultTypeAttribute. DataAttribute AttributeUse this attribute to specify names and private storage fields. The following table describes properties of this attribute.
For more information, see DataAttribute. See alsoCode Generation in LINQ to SQLYou can generate code to represent a database by using either the Object Relational Designer or the SQLMetal command-line tool. In either case, end-to-end code generation occurs in three stages:
For more information, see SqlMetal.exe (Code Generation Tool). Developers using Visual Studio can also use the Object Relational Designer to generate code. See LINQ to SQL Tools in Visual Studio. DBML ExtractorThe DBML Extractor is a LINQ to SQL component that takes database metadata as input and produces a DBML file as output. Code GeneratorThe Code Generator is a LINQ to SQL component that translates DBML files to Visual Basic, C#, or XML mapping files. XML Schema Definition FileThe DBML file must validate against the following XML schema definition (XSD). Distinguish this schema definition file from the one that is used to validate an external mapping file. For more information, see External Mapping. Note Visual Studio users will also find this XSD file in the XML Schemas dialog box as "DbmlSchema.xsd". To use the XSD file correctly for validating a DBML file, see How to: Validate DBML and External Mapping Files. <?xml version="1.0" encoding="utf-16"?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://schemas.microsoft.com/linqtosql/dbml/2007" xmlns="http://schemas.microsoft.com/linqtosql/dbml/2007" elementFormDefault="qualified" > <xs:element name="Database" type="Database" /> <xs:complexType name="Database"> <xs:sequence> <xs:element name="Connection" type="Connection" minOccurs="0" maxOccurs="1" /> <xs:element name="Table" type="Table" minOccurs="0" maxOccurs="unbounded" /> <xs:element name="Function" type="Function" minOccurs="0" maxOccurs="unbounded" /> </xs:sequence> <xs:attribute name="Name" type="xs:string" use="optional" /> <xs:attribute name="EntityNamespace" type="xs:string" use="optional" /> <xs:attribute name="ContextNamespace" type="xs:string" use="optional" /> <xs:attribute name="Class" type="xs:string" use="optional" /> <xs:attribute name="AccessModifier" type="AccessModifier" use="optional" /> <xs:attribute name="Modifier" type="ClassModifier" use="optional" /> <xs:attribute name="BaseType" 
type="xs:string" use="optional" /> <xs:attribute name="Provider" type="xs:string" use="optional" /> <xs:attribute name="ExternalMapping" type="xs:boolean" use="optional" /> <xs:attribute name="Serialization" type="SerializationMode" use="optional" /> <xs:attribute name="EntityBase" type="xs:string" use="optional" /> </xs:complexType> <xs:complexType name="Table"> <xs:all> <xs:element name="Type" type="Type" minOccurs="1" maxOccurs="1" /> <xs:element name="InsertFunction" type="TableFunction" minOccurs="0" maxOccurs="1" /> <xs:element name="UpdateFunction" type="TableFunction" minOccurs="0" maxOccurs="1" /> <xs:element name="DeleteFunction" type="TableFunction" minOccurs="0" maxOccurs="1" /> </xs:all> <xs:attribute name="Name" type="xs:string" use="required" /> <xs:attribute name="Member" type="xs:string" use="optional" /> <xs:attribute name="AccessModifier" type="AccessModifier" use="optional" /> <xs:attribute name="Modifier" type="MemberModifier" use="optional" /> </xs:complexType> <xs:complexType name="Type"> <xs:sequence> <xs:choice minOccurs="0" maxOccurs="unbounded"> <xs:element name="Column" type="Column" minOccurs="0" maxOccurs="unbounded" /> <xs:element name="Association" type="Association" minOccurs="0" maxOccurs="unbounded" /> </xs:choice> <xs:element name="Type" type="Type" minOccurs="0" maxOccurs="unbounded" /> </xs:sequence> <xs:attribute name="IdRef" type="xs:IDREF" use="optional" /> <xs:attribute name="Id" type="xs:ID" use="optional" /> <xs:attribute name="Name" type="xs:string" use="optional" /> <xs:attribute name="InheritanceCode" type="xs:string" use="optional" /> <xs:attribute name="IsInheritanceDefault" type="xs:boolean" use="optional" /> <xs:attribute name="AccessModifier" type="AccessModifier" use="optional" /> <xs:attribute name="Modifier" type="ClassModifier" use="optional" /> </xs:complexType> <xs:complexType name="Column"> <xs:attribute name="Name" type="xs:string" use="optional" /> <xs:attribute name="Member" type="xs:string" 
use="optional" /> <xs:attribute name="Storage" type="xs:string" use="optional" /> <xs:attribute name="AccessModifier" type="AccessModifier" use="optional" /> <xs:attribute name="Modifier" type="MemberModifier" use="optional" /> <xs:attribute name="Type" type="xs:string" use="required" /> <xs:attribute name="DbType" type="xs:string" use="optional" /> <xs:attribute name="IsReadOnly" type="xs:boolean" use="optional" /> <xs:attribute name="IsPrimaryKey" type="xs:boolean" use="optional" /> <xs:attribute name="IsDbGenerated" type="xs:boolean" use="optional" /> <xs:attribute name="CanBeNull" type="xs:boolean" use="optional" /> <xs:attribute name="UpdateCheck" type="UpdateCheck" use="optional" /> <xs:attribute name="IsDiscriminator" type="xs:boolean" use="optional" /> <xs:attribute name="Expression" type="xs:string" use="optional" /> <xs:attribute name="IsVersion" type="xs:boolean" use="optional" /> <xs:attribute name="IsDelayLoaded" type="xs:boolean" use="optional" /> <xs:attribute name="AutoSync" type="AutoSync" use="optional" /> </xs:complexType> <xs:complexType name="Association"> <xs:attribute name="Name" type="xs:string" use="required" /> <xs:attribute name="Member" type="xs:string" use="required" /> <xs:attribute name="Storage" type="xs:string" use="optional" /> <xs:attribute name="AccessModifier" type="AccessModifier" use="optional" /> <xs:attribute name="Modifier" type="MemberModifier" use="optional" /> <xs:attribute name="Type" type="xs:string" use="required" /> <xs:attribute name="ThisKey" type="xs:string" use="optional" /> <xs:attribute name="OtherKey" type="xs:string" use="optional" /> <xs:attribute name="IsForeignKey" type="xs:boolean" use="optional" /> <xs:attribute name="Cardinality" type="Cardinality" use="optional" /> <xs:attribute name="DeleteRule" type="xs:string" use="optional" /> <xs:attribute name="DeleteOnNull" type="xs:boolean" use="optional" /> </xs:complexType> <xs:complexType name="Function"> <xs:sequence> <xs:element name="Parameter" 
type="Parameter" minOccurs="0" maxOccurs="unbounded" /> <xs:choice> <xs:element name="ElementType" type="Type" minOccurs="0" maxOccurs="unbounded" /> <xs:element name="Return" type="Return" minOccurs="0" maxOccurs="1" /> </xs:choice> </xs:sequence> <xs:attribute name="Name" type="xs:string" use="required" /> <xs:attribute name="Id" type="xs:ID" use="optional" /> <xs:attribute name="Method" type="xs:string" use="optional" /> <xs:attribute name="AccessModifier" type="AccessModifier" use="optional" /> <xs:attribute name="Modifier" type="MemberModifier" use="optional" /> <xs:attribute name="HasMultipleResults" type="xs:boolean" use="optional" /> <xs:attribute name="IsComposable" type="xs:boolean" use="optional" /> </xs:complexType> <xs:complexType name="TableFunction"> <xs:sequence> <xs:element name="Argument" type="TableFunctionParameter" minOccurs="0" maxOccurs="unbounded" /> <xs:element name="Return" type="TableFunctionReturn" minOccurs="0" maxOccurs="1" /> </xs:sequence> <xs:attribute name="FunctionId" type="xs:IDREF" use="required" /> <xs:attribute name="AccessModifier" type="AccessModifier" use="optional" /> </xs:complexType> <xs:complexType name="Parameter"> <xs:attribute name="Name" type="xs:string" use="required" /> <xs:attribute name="Parameter" type="xs:string" use="optional" /> <xs:attribute name="Type" type="xs:string" use="required" /> <xs:attribute name="DbType" type="xs:string" use="optional" /> <xs:attribute name="Direction" type="ParameterDirection" use="optional" /> </xs:complexType> <xs:complexType name="Return"> <xs:attribute name="Type" type="xs:string" use="required" /> <xs:attribute name="DbType" type="xs:string" use="optional" /> </xs:complexType> <xs:complexType name="TableFunctionParameter"> <xs:attribute name="Parameter" type="xs:string" use="required" /> <xs:attribute name="Member" type="xs:string" use="required" /> <xs:attribute name="Version" type="Version" use="optional" /> </xs:complexType> <xs:complexType name="TableFunctionReturn"> 
<xs:attribute name="Member" type="xs:string" use="required" /> </xs:complexType> <xs:complexType name="Connection"> <xs:attribute name="Provider" type="xs:string" use="required" /> <xs:attribute name="Mode" type="ConnectionMode" use="optional" /> <xs:attribute name="ConnectionString" type="xs:string" use="optional" /> <xs:attribute name="SettingsObjectName" type="xs:string" use="optional" /> <xs:attribute name="SettingsPropertyName" type="xs:string" use="optional" /> </xs:complexType> <xs:simpleType name="ConnectionMode"> <xs:restriction base="xs:string"> <xs:enumeration value="ConnectionString" /> <xs:enumeration value="AppSettings" /> <xs:enumeration value="WebSettings" /> </xs:restriction> </xs:simpleType> <xs:simpleType name="AccessModifier"> <xs:restriction base="xs:string"> <xs:enumeration value="Public" /> <xs:enumeration value="Internal" /> <xs:enumeration value="Protected" /> <xs:enumeration value="ProtectedInternal" /> <xs:enumeration value="Private" /> </xs:restriction> </xs:simpleType> <xs:simpleType name="UpdateCheck"> <xs:restriction base="xs:string"> <xs:enumeration value="Always" /> <xs:enumeration value="Never" /> <xs:enumeration value="WhenChanged" /> </xs:restriction> </xs:simpleType> <xs:simpleType name="SerializationMode"> <xs:restriction base="xs:string"> <xs:enumeration value="None" /> <xs:enumeration value="Unidirectional" /> </xs:restriction> </xs:simpleType> <xs:simpleType name="ParameterDirection"> <xs:restriction base="xs:string"> <xs:enumeration value="In" /> <xs:enumeration value="Out" /> <xs:enumeration value="InOut" /> </xs:restriction> </xs:simpleType> <xs:simpleType name="Version"> <xs:restriction base="xs:string"> <xs:enumeration value="Current" /> <xs:enumeration value="Original" /> </xs:restriction> </xs:simpleType> <xs:simpleType name="AutoSync"> <xs:restriction base="xs:string"> <xs:enumeration value="Never" /> <xs:enumeration value="OnInsert" /> <xs:enumeration value="OnUpdate" /> <xs:enumeration value="Always" /> 
<xs:enumeration value="Default" /> </xs:restriction> </xs:simpleType> <xs:simpleType name="ClassModifier"> <xs:restriction base="xs:string"> <xs:enumeration value="Sealed" /> <xs:enumeration value="Abstract" /> </xs:restriction> </xs:simpleType> <xs:simpleType name="MemberModifier"> <xs:restriction base="xs:string"> <xs:enumeration value="Virtual" /> <xs:enumeration value="Override" /> <xs:enumeration value="New" /> <xs:enumeration value="NewVirtual" /> </xs:restriction> </xs:simpleType> <xs:simpleType name="Cardinality"> <xs:restriction base="xs:string"> <xs:enumeration value="One" /> <xs:enumeration value="Many" /> </xs:restriction> </xs:simpleType> </xs:schema> Sample DBML FileThe following code is an excerpt from the DBML file created from the Northwind sample database. You can generate the whole file by using SQLMetal with the /xml option. For more information, see SqlMetal.exe (Code Generation Tool). XML<?xml version="1.0" encoding="utf-16"?> <Database Name="northwnd" Class="Northwnd" xmlns="http://schemas.microsoft.com/dsltools/DLinqML"> <Table Name="Customers"> <Type Name="Customer"> <Column Name="CustomerID" Type="System.String" DbType="NChar(5) NOT NULL" IsPrimaryKey="True" CanBeNull="False" /> <Column Name="CompanyName" Type="System.String" DbType="NVarChar(40) NOT NULL" CanBeNull="False" /> <Column Name="ContactName" Type="System.String" DbType="NVarChar(30)" CanBeNull="True" /> <Column Name="ContactTitle" Type="System.String" DbType="NVarChar(30)" CanBeNull="True" /> <Column Name="Address" Type="System.String" DbType="NVarChar(60)" CanBeNull="True" /> <Column Name="City" Type="System.String" DbType="NVarChar(15)" CanBeNull="True" /> <Column Name="Region" Type="System.String" DbType="NVarChar(15)" CanBeNull="True" /> <Column Name="PostalCode" Type="System.String" DbType="NVarChar(10)" CanBeNull="True" /> <Column Name="Country" Type="System.String" DbType="NVarChar(15)" CanBeNull="True" /> <Column Name="Phone" Type="System.String" DbType="NVarChar(24)" 
CanBeNull="True" /> <Column Name="Fax" Type="System.String" DbType="NVarChar(24)" CanBeNull="True" /> <Association Name="FK_CustomerCustomerDemo_Customers" Member="CustomerCustomerDemos" ThisKey="CustomerID" OtherKey="CustomerID" OtherTable="CustomerCustomerDemo" DeleteRule="NO ACTION" /> <Association Name="FK_Orders_Customers" Member="Orders" ThisKey="CustomerID" OtherKey="CustomerID" OtherTable="Orders" DeleteRule="NO ACTION" /> </Type> </Table> </Database> See also
External MappingLINQ to SQL supports external mapping, a process by which you use a separate XML file to specify mapping between the data model of the database and your object model. Advantages of using an external mapping file include the following:
RequirementsThe mapping file must be an XML file, and the file must validate against a LINQ to SQL schema definition (.xsd) file. The following rules apply:
XML Schema Definition FileExternal mapping in LINQ to SQL must validate against the following XML schema definition. Distinguish this schema definition file from the one that is used to validate a DBML file. For more information, see Code Generation in LINQ to SQL. Note Visual Studio users will also find this XSD file in the XML Schemas dialog box as "LinqToSqlMapping.xsd". To use this file correctly for validating an external mapping file, see How to: Validate DBML and External Mapping Files. <?xml version="1.0" encoding="utf-16"?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://schemas.microsoft.com/linqtosql/mapping/2007" xmlns="http://schemas.microsoft.com/linqtosql/mapping/2007" elementFormDefault="qualified" > <xs:element name="Database" type="Database" /> <xs:complexType name="Database"> <xs:sequence> <xs:element name="Table" type="Table" minOccurs="0" maxOccurs="unbounded" /> <xs:element name="Function" type="Function" minOccurs="0" maxOccurs="unbounded" /> </xs:sequence> <xs:attribute name="Name" type="xs:string" use="optional" /> <xs:attribute name="Provider" type="xs:string" use="optional" /> </xs:complexType> <xs:complexType name="Table"> <xs:sequence> <xs:element name="Type" type="Type" minOccurs="1" maxOccurs="1" /> </xs:sequence> <xs:attribute name="Name" type="xs:string" use="optional" /> <xs:attribute name="Member" type="xs:string" use="optional" /> </xs:complexType> <xs:complexType name="Type"> <xs:sequence> <xs:choice minOccurs="0" maxOccurs="unbounded"> <xs:element name="Column" type="Column" minOccurs="0" maxOccurs="unbounded" /> <xs:element name="Association" type="Association" minOccurs="0" maxOccurs="unbounded" /> </xs:choice> <xs:element name="Type" type="Type" minOccurs="0" maxOccurs="unbounded" /> </xs:sequence> <xs:attribute name="Name" type="xs:string" use="required" /> <xs:attribute name="InheritanceCode" type="xs:string" use="optional" /> <xs:attribute name="IsInheritanceDefault" 
type="xs:boolean" use="optional" /> </xs:complexType> <xs:complexType name="Column"> <xs:attribute name="Name" type="xs:string" use="optional" /> <xs:attribute name="Member" type="xs:string" use="required" /> <xs:attribute name="Storage" type="xs:string" use="optional" /> <xs:attribute name="DbType" type="xs:string" use="optional" /> <xs:attribute name="IsPrimaryKey" type="xs:boolean" use="optional" /> <xs:attribute name="IsDbGenerated" type="xs:boolean" use="optional" /> <xs:attribute name="CanBeNull" type="xs:boolean" use="optional" /> <xs:attribute name="UpdateCheck" type="UpdateCheck" use="optional" /> <xs:attribute name="IsDiscriminator" type="xs:boolean" use="optional" /> <xs:attribute name="Expression" type="xs:string" use="optional" /> <xs:attribute name="IsVersion" type="xs:boolean" use="optional" /> <xs:attribute name="AutoSync" type="AutoSync" use="optional" /> </xs:complexType> <xs:complexType name="Association"> <xs:attribute name="Name" type="xs:string" use="optional" /> <xs:attribute name="Member" type="xs:string" use="required" /> <xs:attribute name="Storage" type="xs:string" use="optional" /> <xs:attribute name="ThisKey" type="xs:string" use="optional" /> <xs:attribute name="OtherKey" type="xs:string" use="optional" /> <xs:attribute name="IsForeignKey" type="xs:boolean" use="optional" /> <xs:attribute name="IsUnique" type="xs:boolean" use="optional" /> <xs:attribute name="DeleteRule" type="xs:string" use="optional" /> <xs:attribute name="DeleteOnNull" type="xs:boolean" use="optional" /> </xs:complexType> <xs:complexType name="Function"> <xs:sequence> <xs:element name="Parameter" type="Parameter" minOccurs="0" maxOccurs="unbounded" /> <xs:choice> <xs:element name="ElementType" type="Type" minOccurs="0" maxOccurs="unbounded" /> <xs:element name="Return" type="Return" minOccurs="0" maxOccurs="1" /> </xs:choice> </xs:sequence> <xs:attribute name="Name" type="xs:string" use="optional" /> <xs:attribute name="Method" type="xs:string" use="required" /> 
<xs:attribute name="IsComposable" type="xs:boolean" use="optional" /> </xs:complexType> <xs:complexType name="Parameter"> <xs:attribute name="Name" type="xs:string" use="optional" /> <xs:attribute name="Parameter" type="xs:string" use="required" /> <xs:attribute name="DbType" type="xs:string" use="optional" /> <xs:attribute name="Direction" type="ParameterDirection" use="optional" /> </xs:complexType> <xs:complexType name="Return"> <xs:attribute name="DbType" type="xs:string" use="optional" /> </xs:complexType> <xs:simpleType name="UpdateCheck"> <xs:restriction base="xs:string"> <xs:enumeration value="Always" /> <xs:enumeration value="Never" /> <xs:enumeration value="WhenChanged" /> </xs:restriction> </xs:simpleType> <xs:simpleType name="ParameterDirection"> <xs:restriction base="xs:string"> <xs:enumeration value="In" /> <xs:enumeration value="Out" /> <xs:enumeration value="InOut" /> </xs:restriction> </xs:simpleType> <xs:simpleType name="AutoSync"> <xs:restriction base="xs:string"> <xs:enumeration value="Never" /> <xs:enumeration value="OnInsert" /> <xs:enumeration value="OnUpdate" /> <xs:enumeration value="Always" /> <xs:enumeration value="Default" /> </xs:restriction> </xs:simpleType> </xs:schema> See alsoFrequently Asked QuestionsThe following sections answer some common issues that you might encounter when you implement LINQ. Additional issues are addressed in Troubleshooting. Cannot ConnectQ. I cannot connect to my database. A. Make sure your connection string is correct and that your SQL Server instance is running. Note also that LINQ to SQL requires the Named Pipes protocol to be enabled. For more information, see Learning by Walkthroughs. Changes to Database LostQ. I made a change to data in the database, but when I reran my application, the change was no longer there. A. Make sure that you call SubmitChanges to save results to the database. Database Connection: Open How Long?Q. How long does my database connection remain open? A. 
A connection typically remains open until you consume the query results. If you expect to take time to process all the results and are not opposed to caching them, apply ToList to the query. In common scenarios where each object is processed only once, the streaming model used by both DataReader and LINQ to SQL is superior. The exact details of connection usage depend on the following:
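Assuming a Northwind-style DataContext named db, the streaming and caching approaches can be sketched as follows:

```csharp
// db is assumed to be a Northwind-style DataContext.
var query = from cust in db.Customers
            where cust.City == "London"
            select cust;

// Streaming: the connection remains in use while this loop runs.
foreach (var cust in query)
    Console.WriteLine(cust.CompanyName);

// Caching: ToList executes the query immediately and buffers the
// results, so the connection is free before processing begins.
var cached = query.ToList();
foreach (var cust in cached)
    Console.WriteLine(cust.CompanyName);
```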
Updating Without QueryingQ. Can I update table data without first querying the database? A. Although LINQ to SQL does not have set-based update commands, you can use either of the following techniques to update without first querying:
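Two such techniques, sketched under the assumption of a Northwind-style context db (note that attaching an entity as modified requires a version member or UpdateCheck.Never on the columns):

```csharp
// db is assumed to be a Northwind-style DataContext.

// Technique 1: issue the update directly with ExecuteCommand;
// the {0}-style placeholders are passed as parameter values.
db.ExecuteCommand(
    "UPDATE Customers SET ContactTitle = {0} WHERE CustomerID = {1}",
    "Owner", "ALFKI");

// Technique 2: attach an entity built from values obtained elsewhere
// (for example, deserialized from a client) and submit it as modified.
var cust = new Customer { CustomerID = "ALFKI", ContactTitle = "Owner" };
db.Customers.Attach(cust, true);  // true = treat as modified
db.SubmitChanges();
```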
Unexpected Query ResultsQ. My query is returning unexpected results. How can I inspect what is occurring? A. LINQ to SQL provides several tools for inspecting the SQL code it generates. One of the most important is Log. For more information, see Debugging Support. Unexpected Stored Procedure ResultsQ. I have a stored procedure whose return value is calculated by MAX(). When I drag the stored procedure to the O/R Designer surface, the return value is not correct. A. LINQ to SQL provides two ways to return database-generated values by way of stored procedures:
The following is an example of incorrect output. Because LINQ to SQL cannot map the results, it always returns 0: create procedure proc2 as begin select max(i) from t where name like 'hello' end The following is an example of correct output by using an output parameter: create procedure proc2 @result int OUTPUT as select @result = MAX(i) from t where name like 'hello' go The following is an example of correct output by naming the output result: create procedure proc2 as begin select max(i) AS MaxResult from t where name like 'hello' end For more information, see Customizing Operations By Using Stored Procedures. Serialization ErrorsQ. When I try to serialize, I get the following error: "Type 'System.Data.Linq.ChangeTracker+StandardChangeTracker' ... is not marked as serializable." A. Code generation in LINQ to SQL supports DataContractSerializer serialization. It does not support XmlSerializer or BinaryFormatter. For more information, see Serialization. Multiple DBML FilesQ. When I have multiple DBML files that share some tables in common, I get a compiler error. A. Set the Context Namespace and Entity Namespace properties from the Object Relational Designer to a distinct value for each DBML file. This approach eliminates the name/namespace collision. Avoiding Explicit Setting of Database-Generated Values on Insert or UpdateQ. I have a database table with a DateCreated column that defaults to SQL GETDATE(). When I try to insert a new record by using LINQ to SQL, the value gets set to NULL. I would expect it to be set to the database default. A. LINQ to SQL handles this situation automatically for identity (auto-increment), rowguidcol (database-generated GUID), and timestamp columns. In other cases, you should manually set IsDbGenerated=true and an appropriate AutoSync value (Always, OnInsert, or OnUpdate). Multiple DataLoadOptionsQ. Can I specify additional load options without overwriting the first? A. Yes. 
The first is not overwritten, as in the following example: C#DataLoadOptions dlo = new DataLoadOptions(); dlo.LoadWith<Order>(o => o.Customer); dlo.LoadWith<Order>(o => o.OrderDetails); Errors Using SQL Compact 3.5Q. I get an error when I drag tables out of a SQL Server Compact 3.5 database. A. The Object Relational Designer does not support SQL Server Compact 3.5, although the LINQ to SQL runtime does. In this situation, you must create your own entity classes and add the appropriate attributes. Errors in Inheritance RelationshipsQ. I used the toolbox inheritance shape in the Object Relational Designer to connect two entities, but I get errors. A. Creating the relationship is not enough. You must provide information such as the discriminator column, base class discriminator value, and derived class discriminator value. Provider ModelQ. Is a public provider model available? A. No public provider model is available. At this time, LINQ to SQL supports SQL Server and SQL Server Compact 3.5 only. SQL-Injection AttacksQ. How is LINQ to SQL protected from SQL-injection attacks? A. SQL injection has been a significant risk for traditional SQL queries formed by concatenating user input. LINQ to SQL avoids such injection by using SqlParameter in queries. User input is turned into parameter values. This approach prevents malicious commands from being used from customer input. Changing Read-only Flag in DBML FilesQ. How do I eliminate setters from some properties when I create an object model from a DBML file? A. Take the following steps for this advanced scenario:
APTCAQ. Is System.Data.Linq marked for use by partially trusted code? A. Yes, the System.Data.Linq.dll assembly is among those .NET Framework assemblies marked with the AllowPartiallyTrustedCallersAttribute attribute. Without this marking, assemblies in the .NET Framework are intended for use only by fully trusted code. The principal scenario in LINQ to SQL for allowing partially trusted callers is to enable the LINQ to SQL assembly to be accessed from Web applications, where the trust configuration is Medium. Mapping Data from Multiple TablesQ. The data in my entity comes from multiple tables. How do I map it? A. You can create a view in a database and map the entity to the view. LINQ to SQL generates the same SQL for views as it does for tables. Note The use of views in this scenario has limitations. This approach works most safely when the operations performed on Table<TEntity> are supported by the underlying view. Only you know which operations are intended. For example, most applications are read-only, and another sizeable number perform Create/Update/Delete operations only by using stored procedures against views. Connection PoolingQ. Is there a construct that can help with DataContext pooling? A. Do not try to reuse instances of DataContext. Each DataContext maintains state (including an identity cache) for one particular edit/query session. To obtain new instances based on the current state of the database, use a new DataContext. You can still use underlying ADO.NET connection pooling. For more information, see SQL Server Connection Pooling (ADO.NET). Second DataContext Is Not UpdatedQ. I used one instance of DataContext to store values in the database. However, a second DataContext on the same database does not reflect the updated values. The second DataContext instance seems to return cached values. A. This behavior is by design. LINQ to SQL continues to return the same instances/values that you saw in the first instance. 
When you make updates, you use optimistic concurrency. The original data is used to check against the current database state to assert that it is in fact still unchanged. If it has changed, a conflict occurs and your application must resolve it. One option of your application is to reset the original state to the current database state and to try the update again. For more information, see How to: Manage Change Conflicts. You can also set ObjectTrackingEnabled to false, which turns off caching and change tracking. You can then retrieve the latest values every time that you query. Cannot Call SubmitChanges in Read-only ModeQ. When I try to call SubmitChanges in read-only mode, I get an error. A. Read-only mode turns off the ability of the context to track changes. See alsoSQL Server Compact and LINQ to SQLSQL Server Compact is the default database installed with Visual Studio. For more information, see Using SQL Server Compact (Visual Studio). This topic outlines the key differences in usage, configuration, feature sets, and scope of LINQ to SQL support. Characteristics of SQL Server Compact in Relation to LINQ to SQLBy default, SQL Server Compact is installed for all Visual Studio editions, and is therefore available on the development computer for use with LINQ to SQL. But deployment of an application that uses SQL Server Compact and LINQ to SQL differs from that for a SQL Server application. SQL Server Compact is not a part of the .NET Framework, and therefore must be packaged with the application or downloaded separately from the Microsoft site. Note the following characteristics:
Feature SetThe SQL Server Compact feature set is much simpler than that of SQL Server, in the following ways that can affect LINQ to SQL applications:
See alsoStandard Query Operator TranslationLINQ to SQL translates Standard Query Operators to SQL commands. The query processor of the database determines the execution semantics of SQL translation. Standard Query Operators are defined against sequences. A sequence is ordered and relies on reference identity for each element of the sequence. For more information, see Standard Query Operators Overview (C#) or Standard Query Operators Overview (Visual Basic). SQL deals primarily with unordered sets of values. Ordering is typically an explicitly stated, post-processing operation that is applied to the final result of a query rather than to intermediate results. Identity is defined by values. For this reason, SQL queries are understood to deal with multisets (bags) instead of sets. The following paragraphs describe the differences between the Standard Query Operators and their SQL translation for the SQL Server provider for LINQ to SQL. Operator SupportConcatThe Concat method is defined for ordered multisets where the order of the receiver and the order of the argument are the same. Concat works as UNION ALL over the multisets followed by the common order. The final step is ordering in SQL before results are produced. Concat does not preserve the order of its arguments. To ensure appropriate ordering, you must explicitly order the results of Concat. Intersect, Except, UnionThe Intersect and Except methods are well defined only on sets. The semantics for multisets is undefined. The Union method is defined for multisets as the unordered concatenation of the multisets (effectively the result of the UNION ALL clause in SQL). Take, SkipTake and Skip methods are well defined only against ordered sets. The semantics for unordered sets or multisets are undefined. Note Take and Skip have certain limitations when they are used in queries against SQL Server 2000. For more information, see the "Skip and Take Exceptions in SQL Server 2000" entry in Troubleshooting. 
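For example (Northwind-style names, for illustration), ordering applied inside the arguments of Concat is not guaranteed to survive translation, so the combined result is ordered explicitly instead:

```csharp
// Concat translates to UNION ALL and does not preserve the order of
// its arguments, so order the combined result explicitly.
var names = db.Customers.Select(c => c.CompanyName)
              .Concat(db.Suppliers.Select(s => s.CompanyName))
              .OrderBy(n => n);
```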
Because of limitations on ordering in SQL, LINQ to SQL tries to move the ordering of the argument of these methods to the result of the method. For example, consider the following LINQ to SQL query:

C#
var custQuery = (from cust in db.Customers
                 where cust.City == "London"
                 orderby cust.CustomerID
                 select cust).Skip(1).Take(1);

The generated SQL for this code moves the ordering to the end, as follows (the select list is abbreviated here; the generated statement lists every mapped Customer column):

SQL
SELECT TOP 1 [t0].[CustomerID], [t0].[CompanyName], ...
FROM [Customers] AS [t0]
WHERE (NOT (EXISTS(
    SELECT NULL AS [EMPTY]
    FROM (
        SELECT TOP 1 [t1].[CustomerID]
        FROM [Customers] AS [t1]
        WHERE [t1].[City] = @p0
        ORDER BY [t1].[CustomerID]
        ) AS [t2]
    WHERE [t0].[CustomerID] = [t2].[CustomerID]
    ))) AND ([t0].[City] = @p1)
ORDER BY [t0].[CustomerID]

When Take and Skip are chained together, all the specified orderings must be consistent; otherwise, the results are undefined. Both Take and Skip are well defined for non-negative, constant integral arguments, based on the Standard Query Operator specification.

Operators with No Translation

The following methods are not translated by LINQ to SQL. The most common reason is the difference between unordered multisets and sequences.
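Because chained Take and Skip calls require a consistent ordering, a typical paging pattern uses a single orderby and applies Skip before Take. A sketch, again assuming the Northwind-style `db` context from the example above:

```csharp
// Fetch page 3 of customers, 10 rows per page. One consistent ordering
// (CustomerID) covers both Skip and Take, and both arguments are
// non-negative constants, as the specification requires.
int pageSize = 10;
int pageIndex = 2; // zero-based page number
var page = db.Customers
             .OrderBy(c => c.CustomerID)
             .Skip(pageIndex * pageSize)
             .Take(pageSize);
```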
Expression Translation

Null Semantics

LINQ to SQL does not impose null comparison semantics on SQL. Comparison operators are syntactically translated to their SQL equivalents, so the semantics reflect the SQL semantics defined by server or connection settings. For example, two null values are considered unequal under default SQL Server settings, but you can change the settings to change the semantics. LINQ to SQL does not consider server settings when it translates queries. A comparison with the literal null is translated to the appropriate SQL version (IS NULL or IS NOT NULL). The value of null in collation is defined by SQL Server; LINQ to SQL does not change the collation.

Aggregates

The Standard Query Operator aggregate method Sum evaluates to zero for an empty sequence or for a sequence that contains only nulls. In LINQ to SQL, the semantics of SQL are left unchanged, and Sum evaluates to null instead of zero for an empty sequence or for a sequence that contains only nulls. SQL limitations on intermediate results also apply to aggregates in LINQ to SQL: the Sum of 32-bit integer quantities is not computed by using 64-bit results, so overflow might occur for a LINQ to SQL translation of Sum even if the Standard Query Operator implementation does not overflow for the corresponding in-memory sequence. Likewise, the LINQ to SQL translation of Average of integer values is computed as an integer, not as a double.

Entity Arguments

LINQ to SQL enables entity types to be used in the GroupBy and OrderBy methods. In the translation of these operators, the use of an argument of a type is considered equivalent to specifying all members of that type.
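The difference in Sum semantics matters in practice: if the filtered sequence can be empty, the translated SUM returns NULL, which cannot be assigned to a non-nullable result type. A common defensive sketch (the names assume the Northwind-style `db` context used elsewhere in this topic):

```csharp
// SQL's SUM returns NULL for an empty input, so cast the column to a
// nullable type and fall back to zero on the client side.
decimal total = db.Orders
                  .Where(o => o.CustomerID == "ALFKI")
                  .Sum(o => (decimal?)o.Freight) ?? 0m;
```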
For example, the following two calls are equivalent:

C#
db.Customers.GroupBy(c => c);
db.Customers.GroupBy(c => new { c.CustomerID, c.ContactName });

Equatable / Comparable Arguments

Equality of arguments is required in the implementation of the following methods:

LINQ to SQL supports equality and comparison for flat arguments, but not for arguments that are or contain sequences. A flat argument is a type that can be mapped to a SQL row. A projection of one or more entity types that can be statically determined not to contain a sequence is considered a flat argument. The following are examples of flat arguments:

C#
db.Customers.Select(c => c);
db.Customers.Select(c => new { c.CustomerID, c.City });
db.Orders.Select(o => new { o.OrderID, o.Customer.City });
db.Orders.Select(o => new { o.OrderID, o.Customer });

The following are examples of non-flat (hierarchical) arguments:

C#
// In the following line, c.Orders is a sequence.
db.Customers.Select(c => new { c.CustomerID, c.Orders });
// In the following line, the result has a sequence.
db.Customers.GroupBy(c => c.City);

Visual Basic Function Translation

The following helper functions that are used by the Visual Basic compiler are translated to corresponding SQL operators and functions:
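The flat/non-flat distinction determines which projections can feed equality-based operators such as Distinct. A sketch under the same assumed Northwind-style schema:

```csharp
// Supported: the projection is flat, because it maps to a single SQL row,
// so value equality can be translated to SQL.
var cityCountryPairs = db.Customers
                         .Select(c => new { c.City, c.Country })
                         .Distinct();

// Not supported: the projection contains a sequence (c.Orders), so
// equality over it cannot be translated to SQL.
// var invalid = db.Customers
//                 .Select(c => new { c.City, c.Orders })
//                 .Distinct();
```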
Conversion methods:
Inheritance Support

Inheritance Mapping Restrictions
For more information, see How to: Map Inheritance Hierarchies.

Inheritance in Queries
C# casts are supported only in projection. Casts that are used elsewhere are not translated and are ignored. Aside from SQL function names, SQL really performs only the equivalent of the common language runtime (CLR) Convert; that is, SQL can change the value of one type to another. There is no equivalent of a CLR cast, because there is no concept of reinterpreting the same bits as those of another type. That is why a C# cast works only locally: it is not remoted. The operators is and as, and the GetType method, are not restricted to the Select operator; they can be used in other query operators as well.

SQL Server 2008 Support
Starting with .NET Framework 3.5 SP1, LINQ to SQL supports mapping to the new date and time types introduced with SQL Server 2008 (DATETIME2, DATE, TIME, and DATETIMEOFFSET). However, there are some limitations on the LINQ to SQL query operators that you can use when operating against values mapped to these new types.

Unsupported Query Operators
The following query operators are not supported on values mapped to the new SQL Server date and time types:
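As a concrete illustration of these rules, the following sketch assumes a hypothetical Contact hierarchy mapped with an inheritance discriminator; the type and member names are illustrative, not from the source:

```csharp
// 'is' may appear outside Select, for example in a Where clause:
var customerContacts = db.Contacts.Where(c => c is CustomerContact);

// A C# cast, by contrast, is translated only inside a projection:
var companyNames = db.Contacts
                     .Where(c => c is CustomerContact)
                     .Select(c => ((CustomerContact)c).CompanyName);
```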
For more information about mapping to these SQL Server date and time types, see SQL-CLR Type Mapping.

SQL Server 2005 Support

LINQ to SQL does not support the following SQL Server 2005 features:
SQL Server 2000 Support

The following SQL Server 2000 limitations (compared to Microsoft SQL Server 2005) affect LINQ to SQL support.

Cross Apply and Outer Apply Operators
These operators are not available in SQL Server 2000. LINQ to SQL tries a series of rewrites to replace them with appropriate joins. Cross Apply and Outer Apply are generated for relationship navigations. The set of queries for which such rewrites are possible is not well defined; for this reason, the minimal set of queries supported for SQL Server 2000 is the set that does not involve relationship navigation.

text / ntext
The data types text and ntext cannot be used in certain query operations that are supported for varchar(max) and nvarchar(max) in Microsoft SQL Server 2005. No resolution is available for this limitation. Specifically, you cannot use Distinct() on any result that contains members mapped to text or ntext columns.

Behavior Triggered by Nested Queries
The SQL Server 2000 (through SP4) binder has some idiosyncrasies that are triggered by nested queries. The set of SQL queries that triggers these idiosyncrasies is not well defined; for this reason, you cannot define the set of LINQ to SQL queries that might cause SQL Server exceptions.

Skip and Take Operators
Take and Skip have certain limitations when they are used in queries against SQL Server 2000. For more information, see the "Skip and Take Exceptions in SQL Server 2000" entry in Troubleshooting.

Object Materialization
Materialization creates CLR objects from rows that are returned by one or more SQL queries.
See also
SamplesThis topic provides links to the Visual Basic and C# solutions that contain LINQ to SQL sample code. In This Section
Visual Basic version of the SampleQueries solution
C# version of the SampleQueries solution

Follow these steps to find additional examples of LINQ to SQL code and applications:
See also
Source/Reference
©sideway ID: 201000023 Last Updated: 10/23/2020 Revision: 0
Copyright © 2000-2025 Sideway. All rights reserved. Disclaimers last modified on 06 September 2019.