Windows 2012 Hosting - MVC 6 and SQL 2014 BLOG

Tutorial and Articles about Windows Hosting, SQL Hosting, MVC Hosting, and Silverlight Hosting

SQL 2014 Hosting - ASPHostPortal :: How to Optimize Your SQL Query

October 6, 2015 08:59 by author Jervis

Modern software applications serve millions of concurrent users. Developing an application that performs well at that scale takes considerable effort and many tools and techniques. Developers improve application performance through better design, coding, and database development; on the database side, query optimization and evaluation techniques play a vital part.

Select only the required fields

It is very important to avoid selecting data the query does not need. Select only the fields you require, not every field in the table.

SELECT login_id, password FROM tbluser

Index

Properly created indexes help optimize search performance. Understand the database and how it is queried before choosing an index; selecting a heavily used field as the index key is very important.

CREATE CLUSTERED INDEX ind_login_id ON tbluser(login_id)

Primary Key

The primary key is the most important index of the table. The most important consideration for a primary key is choosing a short, unique field, which leads to fast access to the data records.

CREATE TABLE tbluser(
  id INT NOT NULL,
  name VARCHAR(150),
  email VARCHAR(100),
  login_id VARCHAR(100),
  password VARCHAR(10),
  PRIMARY KEY (id)
)

Index unique column

Indexing a unique column improves searching and increases the efficiency of the database. Understand the data fields and how they are used before indexing a unique column; indexing a rarely used column does not improve the efficiency of the database.

CREATE UNIQUE INDEX ind_email ON tbluser(email)

Select limited records

No user interface can visualize thousands of records at once, so there is no need to select all of them at once. Always limit the selection when the table holds a large number of records, and select only the data you need.

SELECT TOP 10 id, name, email, login_id, password FROM tbluser

Selection of correct data type and length

Use the most appropriate data type and the correct length for the data. A poor choice of data type produces a bulky database and poor performance, while a correct choice improves resource utilization on the database server.

CREATE TABLE tbluser(
   id INT,
   name VARCHAR(150),
   email VARCHAR(100),
   login_id VARCHAR(100),
   password VARCHAR(10)
)

Avoid IN subqueries

Avoid the use of IN subqueries in your applications where possible. An IN subquery can force the engine to evaluate every record of table A against table B (a product of the records) before selecting the required data.

SELECT login_id,name, email FROM tbluser WHERE login_id IN ( SELECT login_id FROM tbllogin_details)

A better approach is to use an INNER JOIN, as in the following:

SELECT u.login_id, u.name, u.email FROM tbluser u INNER JOIN tbllogin_details d ON u.login_id = d.login_id

Avoid NOT operator

Avoid the NOT operator when the number of qualifying records is lower than the number of non-qualifying records. Always prefer a positive operator such as LIKE or EXISTS over NOT LIKE or NOT EXISTS.

SELECT * FROM tbluser WHERE email NOT LIKE '%gmail%'  

The preferred way is:

SELECT * FROM tbluser WHERE email LIKE '%yahoo%'  
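The same advice applies to EXISTS versus NOT EXISTS. Here is a minimal sketch, reusing the tbluser and tbllogin_details tables from the earlier examples:

SELECT u.login_id, u.name, u.email
FROM tbluser u
WHERE EXISTS (SELECT 1 FROM tbllogin_details d WHERE d.login_id = u.login_id)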



SQL Hosting with ASPHostPortal :: Using SqlBulkCopy and C# to Upload a File

August 12, 2015 08:17 by author Jervis

In this article I am going to write about SqlBulkCopy and its major properties and methods. The article gives you the code for a high-performance transfer of rows from an XML file to SQL Server using SqlBulkCopy and C#.

SqlBulkCopy was introduced as part of the .NET Framework 2.0. It is a simple and easy tool for transferring complicated or simple data from one data source to another. You can read data from any data source, as long as it can be loaded into a DataTable or read by an IDataReader, and transfer it to SQL Server with high performance using SqlBulkCopy.

In real-world applications, millions of records are transferred between data stores every day. There are multiple ways to transfer the data, such as SQL Server's command-prompt bcp utility, generating INSERT statements, building SSIS packages, and SqlBulkCopy. SqlBulkCopy gives you a significant performance gain over the other tools.

SqlBulkCopy Constructors

A SqlBulkCopy instance can be initialized in four different ways:

1. Accepts an already open SqlConnection for the destination.
2. Accepts a connection string for a SqlConnection. This constructor actually opens and initializes a new SqlConnection for the destination.
3. Accepts a connection string for a SqlConnection and a SqlBulkCopyOptions enum value. This constructor also opens and initializes a new SqlConnection for the destination.
4. Accepts an already open SqlConnection and a SqlBulkCopyOptions enum value.

SqlBulkCopy bulkCopy =
    new SqlBulkCopy(destinationConnection.ConnectionString,
        SqlBulkCopyOptions.TableLock);

BatchSize

SqlBulkCopy.BatchSize is an integer property with a default value of 0. It decides how many rows are sent to the server in one batch. If you do not set a value for this property, or set it to 0, all the records are sent in a single batch.

The following example sets the BatchSize property to 50.

bulkCopy.BatchSize = 50;

ColumnMappings

SqlBulkCopy.ColumnMappings is a collection of the columns that need to be mapped from the source table to the destination table's columns. You do not need to map columns whose names are identical, but it is essential to map columns whose names differ. If SqlBulkCopy cannot find a matching column, it throws a System.InvalidOperationException.

You can map the columns in different ways; giving both column names is the easiest and most readable method.

The code below maps the column OrderID from the source table to the column NewOrderID of the destination table.

bulkCopy.ColumnMappings.Add("OrderID", "NewOrderID");  

Data type issues when mapping columns

SqlBulkCopy is particular about matching column data types: both columns have to be of the same data type. If you have nullable columns, you explicitly have to convert such columns to the desired data type.

The code below converts NULL to an empty string cast as varchar(2), which can be mapped to any varchar(2) column of the destination table.

SELECT  CAST(ISNULL(ShipRegion,'') as varchar(2))
            as ShipRegion FROM Orders

Quick note: if you have computed columns such as SUM or AVG, make sure they return the expected data type. If your destination table expects a decimal(15,7) column, you will have to explicitly convert the source column to decimal(15,7), because SUM will by default return decimal(38,7).
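For example, here is a minimal sketch assuming a hypothetical decimal(15,7) column named Amount on the Orders table:

SELECT CAST(SUM(Amount) AS decimal(15,7)) AS TotalAmount
FROM Orders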

DestinationTableName

DestinationTableName sets the name of the destination table. The WriteToServer method copies the source rows into this particular table.

The code below sets the destination table to "TopOrders".

bulkCopy.DestinationTableName = "TopOrders";   

NotifyAfter and SqlRowsCopied

NotifyAfter is an integer property with a default value of 0, and SqlRowsCopied is an event. The value of NotifyAfter indicates after how many rows the SqlRowsCopied event is raised.

The code below raises the SqlRowsCopied event after every 100 processed rows.

bulkCopy.SqlRowsCopied +=
    new SqlRowsCopiedEventHandler(OnSqlRowsTransfer);
bulkCopy.NotifyAfter = 100;

private static void OnSqlRowsTransfer(object sender, SqlRowsCopiedEventArgs e)
{
    Console.WriteLine("Copied {0} so far...", e.RowsCopied);
}

WriteToServer

WriteToServer is the method that actually copies your source data into the destination table. It accepts an array of DataRows, a DataTable, or an IDataReader. With a DataTable, you can also specify the state of the rows that should be processed.

The following code processes the rows from the sourceData DataTable whose RowState is Added into the destination table.

bulkCopy.WriteToServer(sourceData, DataRowState.Added);
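Putting the pieces above together, here is a minimal end-to-end sketch. The file name, connection string, and the table and column names (Orders.xml, TopOrders, OrderID/NewOrderID) are assumptions for illustration only:

using System;
using System.Data;
using System.Data.SqlClient;

class BulkCopyDemo
{
    static void Main()
    {
        // Hypothetical file and connection string -- adjust for your environment.
        string xmlFile = @"C:\Data\Orders.xml";
        string connStr = "Data Source=.;Initial Catalog=Demo;Integrated Security=True";

        // Load the XML file into a DataTable via a DataSet.
        DataSet ds = new DataSet();
        ds.ReadXml(xmlFile);
        DataTable sourceData = ds.Tables[0];

        using (SqlBulkCopy bulkCopy =
            new SqlBulkCopy(connStr, SqlBulkCopyOptions.TableLock))
        {
            bulkCopy.DestinationTableName = "TopOrders";
            bulkCopy.BatchSize = 50;
            bulkCopy.NotifyAfter = 100;
            bulkCopy.SqlRowsCopied +=
                new SqlRowsCopiedEventHandler(OnSqlRowsTransfer);

            // Note: once any mapping is added, only mapped columns are copied.
            bulkCopy.ColumnMappings.Add("OrderID", "NewOrderID");

            // Copy the rows from the DataTable to the destination table.
            bulkCopy.WriteToServer(sourceData);
        }
    }

    private static void OnSqlRowsTransfer(object sender, SqlRowsCopiedEventArgs e)
    {
        Console.WriteLine("Copied {0} so far...", e.RowsCopied);
    }
}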



SQL 2014 Hosting - ASPHostPortal.com :: Introduction to In-Memory OLTP in SQL 2014

July 4, 2014 06:49 by author Jervis

Microsoft's new release of SQL Server 2014 comes pretty close on the heels of the SQL Server 2012 release. For many organizations this could be a hurdle to adoption, because upgrading core pieces of an IT infrastructure can be both costly and resource-intensive. However, SQL Server 2014 has several compelling new features that can definitely justify an upgrade. Here is an overview of the SQL Server 2014 features:

1. New In-Memory OLTP Engine
2. Enhanced Windows Server 2012 Integration
3. Improvement in Business Intelligence
4. Office 365 Integration
5. And more

In today's post, I want to examine SQL Server 2014 In-Memory OLTP from different angles: how to start using it, directions for migration planning, a close review of many of its limitations, a discussion of In-Memory OLTP's applicability, and where SQL Server In-Memory OLTP can be an alternative to in-memory dynamic caching and where it is complementary.

The question now is: what is In-Memory Online Transaction Processing?

SQL Server 2014's biggest feature is definitely its in-memory transaction processing, or In-Memory OLTP, which Microsoft claims makes database operations much faster. In-memory database technology for SQL Server has long been in the works under the code name "Hekaton". Hekaton is a database engine component optimized for accessing memory-resident tables; it is fully integrated into the SQL Server 2014 database engine.

The Function of In-Memory Online Transaction Processing

Hekaton facilitates the creation of memory-resident tables (i.e., memory-optimized tables) and indexes. Besides that, it provides the option to compile Transact-SQL stored procedures that access memory-optimized tables into machine code. Memory-optimized tables deliver better performance because the core engine uses lock-free algorithms that require no locks or latches when the tables are referenced during transaction processing.
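Here is a minimal sketch of a memory-optimized table and a natively compiled stored procedure; the names are hypothetical, and the database must already have a MEMORY_OPTIMIZED_DATA filegroup:

CREATE TABLE dbo.tblSession (
   session_id INT NOT NULL
      PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
   login_id VARCHAR(100) NOT NULL,
   created_at DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO

CREATE PROCEDURE dbo.usp_GetSession @session_id INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
   SELECT session_id, login_id, created_at
   FROM dbo.tblSession
   WHERE session_id = @session_id
END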

The Low Cost of Memory and In-Memory OLTP

The SQL Server database engine was designed in the days when main memory was very costly. Under that design, data is stored on disk and loaded into main memory as required for transaction processing, and any changes to the in-memory data are written back to disk. This disk IO is the main bottleneck for OLTP applications with a huge number of concurrent users, as it involves waiting for locks to be released, for latches to become available, and for log writes to complete.

Today, main memory is far less expensive, and enterprises can easily afford production database servers with main memory sizes measured in terabytes. These declining memory prices led Microsoft to re-think the original engine design from the days when main memory was costly. The result of this re-think is the In-Memory OLTP (a.k.a. Hekaton) database engine component, which supports memory-resident tables and indexes. The In-Memory OLTP engine uses a lock-free algorithm (multiversion optimistic concurrency control) that requires no locks or latches when memory-optimized tables are referenced during transaction processing. To support data durability it still writes to the transaction log, but the amount of data written to the log is reduced considerably.

Migrating to SQL Server 2014 In-Memory OLTP

Migration to In-Memory OLTP has to be performed in a development environment and carefully tested. Your high-availability design, database design, table schemas and data, stored procedures, business logic in the database, and even application code may all require many syntax changes to use In-Memory OLTP.

This is not a “click and migrate” process. It requires development cycles, application and database design and code changes.

The right way to use the In-Memory OLTP engine is:

  • Plan your production database architecture. In-Memory OLTP is very different and has many limitations in terms of the high-availability, mirroring, and replication functionality available;
  • Carefully plan your new database (and possibly application) design;
  • Migrate several specific tables and procedures that are good candidates to benefit;
  • Develop or change your business logic to fit the new design;
  • Test and evaluate;
  • Deploy.

To evaluate whether In-Memory OLTP can improve your database performance, you can use Microsoft's new AMR (Analysis, Migrate and Report) tool. To help with the actual migration, you can use the Memory Optimization Advisor for tables and the Native Compilation Advisor to help port a stored procedure to a natively compiled stored procedure.

The AMR tool helps identify the tables and stored procedures that would benefit from being moved into memory, and it also helps perform the actual migration of those database objects. The AMR tool is installed when you select the "Complete" option under "Management Tools", and it is later accessed through SQL Server Management Studio (SSMS) under Reports -> Management Data Warehouse -> Transaction Performance Analysis.

The AMR tool reports which tables and procedures can benefit the most from In-Memory OLTP and provides a hint of how complex the conversion will be. The reports show recommendations based on usage, contention, and performance.

After you identify a table that you would like to port to In-Memory OLTP, you can use the Memory Optimization Advisor to help you migrate the disk-based table. In SSMS Object Explorer, right-click the table you want to convert and select Memory Optimization Advisor.

Conclusion

The article above is only a brief introduction to one of the new features in SQL Server 2014. Want to try more? We support the latest SQL Server 2014 hosting in our hosting environment. Just take a look at our site for more information.



SQL 2014 Hosting with ASPHostPortal.com :: How to Get Data in Textbox from SQL 2014 Database in ASP.NET

February 19, 2014 11:42 by author Kenny

Learning SQL can be very rewarding. Once you have a basic understanding of SQL, you can start to develop more advanced websites, and you can (hopefully) charge more money for doing so!
Now, we will give you a tutorial on how to get data from a database and display it as label text or textbox text.

Basically, there are six steps in this tutorial:

1. Design View

2. Source View

3. Code Behind (here I am using a Student database; a sketch follows this list)

4. Output after debugging, before giving input data

5. Student Database

6. Output
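For step 3, the code-behind might look like the following minimal sketch. The Student table, its columns (RollNo, Name, Email), the control IDs (txtRollNo, txtName, txtEmail, btnShow), and the connection string are assumptions for illustration:

using System;
using System.Data.SqlClient;

public partial class StudentDetails : System.Web.UI.Page
{
    // Hypothetical connection string -- adjust for your environment.
    private const string ConnStr =
        "Data Source=.;Initial Catalog=Student;Integrated Security=True";

    protected void btnShow_Click(object sender, EventArgs e)
    {
        using (SqlConnection con = new SqlConnection(ConnStr))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT Name, Email FROM Student WHERE RollNo = @RollNo", con))
        {
            cmd.Parameters.AddWithValue("@RollNo", txtRollNo.Text);
            con.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                if (reader.Read())
                {
                    // Fill the textboxes with the fetched values.
                    txtName.Text = reader["Name"].ToString();
                    txtEmail.Text = reader["Email"].ToString();
                }
            }
        }
    }
}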

 



About ASPHostPortal.com

We’re a company that works differently to most. Value is what we output and help our customers achieve, not how much money we put in the bank. It’s not because we are altruistic. It’s based on an even simpler principle. "Do good things, and good things will come to you".

Success for us is something that is continually experienced, not something that is reached. For us it is all about the experience – more than the journey. Life is a continual experience. We see the Internet as being an incredible amplifier to the experience of life for all of us. It can help humanity come together to explode in knowledge exploration and discussion. It is a continual enlightenment of new ideas, experiences, and passions.




Corporate Address (Location)

ASPHostPortal
170 W 56th Street, Suite 121
New York, NY 10019
United States
