Windows 2012 Hosting - MVC 6 and SQL 2014 BLOG

Tutorial and Articles about Windows Hosting, SQL Hosting, MVC Hosting, and Silverlight Hosting

SQL 2014 Hosting - ASPHostPortal :: How to Optimize Your SQL Query

clock October 6, 2015 08:59 by author Jervis

Modern software applications have millions of concurrent users. Developing an efficient, serviceable application requires a huge amount of effort and many tools and techniques. Software developers always try to improve application performance by improving design, coding, and database development. For database development, query optimization and evaluation techniques play a vital part.

Select required fields only

It is very important to avoid unnecessary data selection in a query. Select only the fields you need, not all fields of the table.

SELECT login_id, password FROM tbluser

Index

Properly created indexes help optimize search results. You need a good understanding of the database before selecting a well-performing index, and choosing a heavily used field as the index is very important.

CREATE CLUSTERED INDEX ind_login_id ON tbluser(login_id)

Primary Key

The Primary Key is the most important index of the table. The most important thing about a Primary Key is the selection of a short and unique field. This will lead to easy access to the data records.

CREATE TABLE tbluser(
  id INT,
  name VARCHAR(150),
  email VARCHAR(100),
  login_id VARCHAR(100),
  password VARCHAR(10),
  PRIMARY KEY (id)
)

Index unique column

Indexing a unique column will improve searching and increase the efficiency of the database. You must have a good understanding of the data fields and their utilization before indexing a unique column; indexing a rarely used column does not help improve the efficiency of the database.

CREATE INDEX ind_email ON tbluser(email)  

Select limited records

No user interface can display thousands of records at once, so there is no need to select all the records in one go; always limit the selection when you have a large number of records, and select only the required data.

SELECT TOP 10 id, name, email, login_id, password FROM tbluser

Selection of correct data type and length

Use the most appropriate data type and the correct length for the data. A bad choice of data type produces bulky databases and poor performance; the correct choice improves resource utilization on the database server.

CREATE TABLE tbluser(id INT,  
   name VARCHAR(150),  
   email VARCHAR(100),  
   login_id VARCHAR(100),  
   password VARCHAR(10)  
)  

Avoid IN sub-queries

Always avoid the use of IN sub-queries in your applications. An IN sub-query evaluates all the records of table A against table B (a product of records) before selecting the required data.

SELECT login_id,name, email FROM tbluser WHERE login_id IN ( SELECT login_id FROM tbllogin_details)

A better approach is to use an INNER JOIN, as in the following:

SELECT login_id, name, email FROM tbluser INNER JOIN tbllogin_details ON tbluser.login_id = tbllogin_details.login_id

Avoid NOT operator

Avoid the NOT operator when the number of qualifying records is lower than the number of unqualified records. Prefer positive operators such as LIKE and EXISTS over NOT LIKE and NOT EXISTS.

SELECT * FROM tbluser WHERE email NOT LIKE '%gmail%'  

The preferred way is:

SELECT * FROM tbluser WHERE email LIKE '%yahoo%'  



SQL Hosting with ASPHostPortal :: Using SQLBulkCopy and C# to Upload File

clock August 12, 2015 08:17 by author Jervis

In this article I am going to write about SqlBulkCopy and its major properties and methods. This article gives you the code for a high-performance transfer of rows from an XML file to SQL Server with SqlBulkCopy and C#.

SqlBulkCopy was introduced as part of .NET Framework 2.0. It is a simple and easy tool for transferring complicated or simple data from one data source to another. You can read data from any data source, as long as that data can be loaded into a DataTable or read by an IDataReader, and transfer it with high performance to SQL Server using SqlBulkCopy.

In real-world applications, millions of records are transferred from one data store to another every day. There are multiple ways to transfer the data, such as the command-prompt bcp utility of SQL Server, INSERT statements, SSIS packages, and SqlBulkCopy. SqlBulkCopy gives you a significant performance gain over the other tools.

SQLBulkCopy Constructor

A SqlBulkCopy instance can be initialized in four different ways:

1. Accepts an already open SqlConnection for the destination.
2. Accepts the connection string of a SqlConnection. This constructor actually opens and initializes a new instance of SqlConnection for the destination.
3. Accepts the connection string of a SqlConnection and an enum value of SqlBulkCopyOptions. This constructor actually opens and initializes a new instance of SqlConnection for the destination.
4. Accepts an already opened SqlConnection and an enum value of SqlBulkCopyOptions.

SqlBulkCopy bulkCopy =
    new SqlBulkCopy(destinationConnection.ConnectionString,
        SqlBulkCopyOptions.TableLock);

BatchSize

SqlBulkCopy's BatchSize is an integer property with a default value of 0. It decides how many rows are sent to the server in one batch. If you do not set a value for this property, or set it to 0, all the records are sent in a single batch.

The following example sets the BatchSize property to 50.

bulkCopy.BatchSize = 50;

ColumnMappings

SqlBulkCopy's ColumnMappings is a collection of column mappings from the source table to the destination table's columns. You do not need to map the columns if the column names are the same; however, it is very important to map the columns if the column names differ. If SqlBulkCopy does not find a matching column, it throws a System.InvalidOperationException.

You can map the columns in different ways; specifying both column names is the easiest and most readable method.

The below code maps the column OrderID from the source table to the column NewOrderID of the destination table.

bulkCopy.ColumnMappings.Add("OrderID", "NewOrderID");  

Data Type issue while mapping the column

SqlBulkCopy is particular about matching column data types: both columns have to be of the same data type. If you have nullable columns, you explicitly have to convert such columns to the desired data type.

The below code converts NULL to varchar(2) and can be mapped to any varchar(2) column of the destination table.

SELECT  CAST(ISNULL(ShipRegion,'') as varchar(2))
            as ShipRegion FROM Orders

Quick note: if you are using computed columns like SUM, AVG, etc., make sure they return the expected data type. If your destination table expects a column of decimal(15,7), you will have to explicitly convert the source column to decimal(15,7), because SUM will by default return decimal(38,7).
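As a small sketch of that cast (the table and column names here are assumed for illustration, not from the article):

```sql
-- Assumed names: force the aggregate into the decimal(15,7) type
-- that the destination column expects
SELECT CAST(SUM(UnitPrice * OrderQty) AS decimal(15,7)) AS TotalAmount
FROM Sales.SalesOrderDetail
```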

DestinationTableName

It sets the name of the destination table. The WriteToServer method will copy the source rows to this particular table.

Below code will set the destination table as "TopOrders".

bulkCopy.DestinationTableName = "TopOrders";   

NotifyAfter and SqlRowsCopied

NotifyAfter is an integer property with a default value of 0, and SqlRowsCopied is an event. The value of NotifyAfter indicates when to raise the SqlRowsCopied event.

The below code shows that after every 100 rows are processed, the SqlRowsCopied event handler will be executed.

bulkCopy.SqlRowsCopied +=
    new SqlRowsCopiedEventHandler(OnSqlRowsTransfer);
bulkCopy.NotifyAfter = 100;

private static void
    OnSqlRowsTransfer(object sender, SqlRowsCopiedEventArgs e)
{
        Console.WriteLine("Copied {0} so far...", e.RowsCopied);
}

WriteToServer

WriteToServer is the method that actually copies your source data to the destination table. It accepts an array of DataRows, a DataTable, or an IDataReader. With a DataTable you can also specify the state of the rows that need to be processed.

The following code will process the rows from the sourceData DataTable that have a RowState of Added.

bulkCopy.WriteToServer(sourceData, DataRowState.Added);
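Putting the properties above together, a minimal end-to-end sketch might look like the following. The connection strings, source query, and table/column names are placeholders for illustration, not from the article:

```csharp
using System.Data.SqlClient;

class BulkCopyDemo
{
    static void Main()
    {
        // Placeholder connection strings -- replace with your own
        string srcConn = "Data Source=.;Initial Catalog=Source;Integrated Security=true";
        string dstConn = "Data Source=.;Initial Catalog=Dest;Integrated Security=true";

        using (var source = new SqlConnection(srcConn))
        {
            source.Open();
            using (var cmd = new SqlCommand("SELECT OrderID, ShipRegion FROM Orders", source))
            using (SqlDataReader reader = cmd.ExecuteReader())
            using (var bulkCopy = new SqlBulkCopy(dstConn, SqlBulkCopyOptions.TableLock))
            {
                bulkCopy.DestinationTableName = "TopOrders";
                bulkCopy.BatchSize = 50;
                bulkCopy.ColumnMappings.Add("OrderID", "NewOrderID");
                bulkCopy.WriteToServer(reader); // streams rows to the destination
            }
        }
    }
}
```

Reading through an IDataReader as shown keeps memory usage low, because rows are streamed to the server rather than buffered in a DataTable first.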



SQL Hosting - ASPHostPortal :: How to Schedule Backup with SQL Server Express

clock December 14, 2012 09:31 by author Jervis

Microsoft’s SQL Server Express is a fantastic product for anyone needing a relational database on a limited budget. SQL Server Express is free, but it comes with a few limitations: it utilizes only 1 GB of RAM, databases are limited to 10 GB, and it does not include SQL Profiler. For low-volume sites that do not need enterprise-level capabilities, this is a compelling solution. Here is a complete SQL Server feature comparison of all the SQL Server editions.

There is one other important limitation that needs to be mentioned: SQL Server Express does not include SQL Agent. This means you cannot schedule administrative tasks such as backing up your databases. As everyone knows, backing up data is critical, and with SQL Server Express you can still back up your databases, but you have to do it manually. So what does one do without a budget to license SQL Server but with a need for scheduled database backups?

The answer is SQL Scheduler. SQL Scheduler is a simple lightweight application that installs on your server and runs as a service. Using the application’s GUI you can create multiple jobs to run your backups on a predefined schedule and you can even configure it to send you an email notification if it completes or fails to run properly.

After you download the program, unzip the archive and run the InstallService.bat file from the command prompt. The installation completes quickly.

Once you launch the program you just need to connect to your local SQL Server Express instance on your server. Here is how it will appear without any jobs configured.

From the File menu, create a new job. In this example we’ll create one job to take a full backup of each database configured on the server.

Click on the Details tab and you’ll be able to enter your SQL statements for whatever task you’re trying to accomplish. Be sure to edit the script and change the path where you want the database backups stored on your server.
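The job's script itself is not reproduced in this post; a minimal full-backup script of the kind such a job would run might look like the following sketch (the folder D:\Backup\ is an assumption; change it to your own path):

```sql
-- Sketch: back up each user database to a date-stamped file.
-- The path D:\Backup\ is assumed; edit it for your server.
DECLARE @name sysname, @path nvarchar(260)
DECLARE db_cursor CURSOR FOR
    SELECT name FROM sys.databases
    WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb')
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @path = N'D:\Backup\' + @name + N'_'
              + CONVERT(nvarchar(8), GETDATE(), 112) + N'.bak'
    BACKUP DATABASE @name TO DISK = @path
    FETCH NEXT FROM db_cursor INTO @name
END
CLOSE db_cursor
DEALLOCATE db_cursor
```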

On the Schedule tab you can define the time and date your job will run.

To have the job send you an email upon completion, or in the event of a failure, open the SQLScheduler.WindowsService.exe.config file and enter the mail server address along with the username and password for authentication.

<system.net>
  <mailSettings>
    <smtp deliveryMethod="Network" from="[email protected]">
      <network host="localhost" userName="x" password="x" />
    </smtp>
  </mailSettings>
</system.net>

Once the job runs a full backup will be created for each database and it will be stored in a subfolder of the path you specified in the script.
Now that your daily full backups are being created locally on the server, the next step would be to configure Windows Server Backup or a third-party online backup solution.

 



SQL 2012 Hosting - ASPHostPortal :: Tabular Modeling in SQL Server 2012

clock September 24, 2012 07:44 by author Jervis

The tabular model is a new database structure in SQL Server 2012 Analysis Services. When you create a tabular model project, SQL Server Data Tools (SSDT) adds a Model.bim file to the project and creates a workspace database in an Analysis Services instance. The workspace database acts as temporary storage for data while you develop a model in SSDT.

Workspace Database

The workspace database stores the data in memory; SSDT retrieves the data from the workspace database when you view data in the diagram view.


The below Model.bim properties can be used for setting values:

- Workspace Server – use this property to set the server where you want to host the database.
- Workspace Database – you cannot assign a value; Analysis Services assigns a name to it.

You must be an administrator for the analysis services instance hosting the workspace database.

Table Import Wizard

You can import data into the model using the Table Import Wizard from one or more data sources. If you are using a relational data source, you can select from a list of tables and views. You can filter the data before importing it into the model.



Table Model Designer


After importing the data into the model, designer shows the data as shown below.




When you import data from a relational data source, the import process identifies the relationships and adds them to the model. To view the relationships, switch to the diagram view.




You can add new relationship by clicking a column in one table and dragging the cursor to the corresponding column in the second table.


Calculated Columns

Using a Data Analysis Expressions (DAX) formula, you can concatenate values from two columns into a single column.
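As a small sketch (the column names here are assumed, not from the article), the DAX formula for such a calculated column could be:

```dax
= [FirstName] & " " & [LastName]
```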




Adding a Measure to the Model

To add a new measure, click a cell in the calculation area, then type the DAX formula in the formula bar.
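For example, a hypothetical measure over an assumed FactSales table might be:

```dax
Total Sales := SUM(FactSales[SalesAmount])
```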




More about using the tabular model in a project can be read here.

Reasons why you must trust ASPHostPortal.com

Every provider will tell you how great their support, uptime, expertise, guarantees, etc., are. Take a close look: what they’re really offering you is nothing close to what ASPHostPortal does. You will be treated with respect and provided the courtesy and service you would expect from a world-class web hosting business.

You’ll have highly trained, skilled professional technical support people ready, willing, and wanting to help you 24 hours a day. Your web hosting account servers are monitored from three monitoring points, with two alert points, every minute, 24 hours a day, 7 days a week, 365 days a year. The following is a list of other added benefits you can find when hosting with us:


- DELL Hardware
Dell hardware is engineered to keep critical enterprise applications running around the clock, with clustered solutions fully tested and certified by Dell and other leading operating system and application providers.
- Recovery Systems
Recovery becomes easy and seamless with our fully managed backup services. We monitor your server to ensure your data is properly backed up and recoverable so when the time comes, you can easily repair or recover your data.

- Control Panel
We provide one of the most comprehensive customer control panels available. Providing maximum control and ease of use, our Control Panel serves as the central management point for your ASPHostPortal account. You’ll use a flexible, powerful hosting control panel that will give you direct control over your web hosting account. Our control panel and systems configuration is fully automated and this means your settings are configured automatically and instantly.

- Excellent Expertise in Technology
The reason we can provide you with a great amount of power, flexibility, and simplicity at such a discounted price is due to incredible efficiencies within our business. We have not just been providing hosting for many clients for years, we have also been researching, developing, and innovating every aspect of our operations, systems, procedures, strategy, management, and teams. Our operations are based on a continual improvement program where we review thousands of systems, operational and management metrics in real-time, to fine-tune every aspect of our operation and activities. We continually train and retrain all people in our teams. We provide all people in our teams with the time, space, and inspiration to research, understand, and explore the Internet in search of greater knowledge. We do this while providing you with the best hosting services for the lowest possible price.

- Data Center
ASPHostPortal’s modular Tier-3 data center was specifically designed to be a world-class web hosting facility totally dedicated to uncompromised performance and security.
- Monitoring Services
From the moment your server is connected to our network it is monitored for connectivity, disk, memory and CPU utilization – as well as hardware failures. Our engineers are alerted to potential issues before they become critical.

- Network
ASPHostPortal has architected its network like no other hosting company. Every facet of our network infrastructure scales to gigabit speeds with no single point of failure.

- Security
Network security and the security of your server are ASPHostPortal’s top priorities. Our security team is constantly monitoring the entire network for unusual or suspicious behavior so that when it is detected we can address the issue before our network or your server is affected.

- Support Services
Engineers staff our data center 24 hours a day, 7 days a week, 365 days a year to manage the network infrastructure and oversee top-of-the-line servers that host our clients’ critical sites and services.

 



SQL 2012 Hosting - ASPHostPortal :: Integration Services Catalog in SQL Server 2012

clock September 8, 2012 06:07 by author Jervis

The Integration Services Catalog is a new feature in SQL Server 2012 which supports centralized storage of packages and configuration files. You can host only one catalog per SQL Server instance. When you deploy your project using the project deployment model, all project components are stored in the catalog. This post describes how to create a catalog and set its properties.



Catalog creation

1. In SQL Server Management Studio, right-click the Integration Services Catalogs folder and select Create Catalog.


2. In the Create Catalog dialog box, select Enable automatic execution of Integration Services stored procedure at SQL Server startup.

3. The catalog database name cannot be changed; it is set to the default name SSISDB. Provide the password and click OK.



Integration Services uses this password to encrypt sensitive data stored in the catalog.


Catalog Properties

To set the properties, right-click SSISDB and select Properties; the Properties dialog box will open.



The default encryption algorithm is AES_256. Operations include activities such as project deployment, project validation, and project execution; Integration Services stores this information in the catalog. You can see the active operations by right-clicking the SSISDB database.

Project Versioning

Each time you re-deploy a project with the same name to the same folder, a new version is added; up to 10 versions are maintained in the list. To restore a previous version:

- Locate the project under the SSISDB database, right-click it, and select Versions.
- In the Project Versions dialog box, select the version to restore and click the Restore button.



Click Yes to confirm, and then click OK to close the message box.

 



SQL 2012 Hosting - ASPHostPortal :: SSRS with SQL Server 2012 Denali and Sharepoint 2010

clock July 27, 2012 06:20 by author Jervis

In the new SQL Server 2012 (Denali), SSRS is integrated into SharePoint 2010 as a service application. This has a few benefits over the old SSRS:

- Easy deployment of SSRS reports in SharePoint 2010 server.

- Since SSRS is a service application, it now has its own database (like any other service application database).
- Administrators can now perform all the day-to-day activities such as backup/restore of the SSRS application database and checking ULS logs using PowerShell.
- Supports WCF and claims-based communication, built-in scale with load balancing, PowerShell cmdlets, and cross-farm report consumption.
- Alerting is a new capability introduced in Denali Reporting Services. You can now set up alert rules and be alerted when report data changes match those rules.
- Denali uses Visual Studio 2010, so new reports can be built in Visual Studio 2010 and published to SharePoint.

Once you have configured the SSRS service application, you can use various available tools to create and publish interactive reports in SharePoint 2010. Let’s look at some of these tools.


Report Builder 3.0
- Report Builder 3.0 is a tool integrated in Denali that is used for creating and publishing interactive reports to a reporting server such as SharePoint. You can quickly start building reports in Report Builder by creating and publishing report parts and shared datasets, using the Table, Matrix, and Chart wizards, or using the Map wizard to create aggregated data reports on the SharePoint server.

Report Builder and SharePoint 2010
- When SSRS is integrated with SharePoint, “Report Builder” is available as a content type. You can create a library and start using Report Builder as a content type in your library.

Creating a new report with the Report Builder content type
- Once you add the “Report Builder” content type to your library, you can simply launch the Report Builder tool/wizard from the New Document menu on the ribbon and select “Report Builder Report”.

Power View or Project Crescent
- Power View is a feature of Microsoft SQL Server 2012 (Denali) that serves as an add-in for Microsoft SharePoint Server 2010 Enterprise Edition, used to build reports with excellent data exploration, visualization, and presentation. Project Crescent runs in the browser and works against a BI Semantic Model (BISM). The BI Semantic Model is available on the client through the Excel-based PowerPivot modeling tool, or on the server through an Analysis Services tabular project with SQL Server Denali.

Check Out the System Requirements for Power View Here.

Let’s look at some of the advantages of using this new tool:


- Power View is a thin web client that launches right in the browser from a tabular model in SharePoint Server 2010.

- In Power View you’re working with the real data.
- You can simply drag-drop various controls to Power View designer and create presentable reports in minutes.
- In Power View, you can quickly create a variety of visualizations, from tables and matrices to bubble charts and sets of small multiple charts.
- Project Crescent is optimized for design and online screen consumption and interactivity.

Now, you can get all these new features with our SQL 2012 hosting. Click here to visit our site.



SQL 2012 Hosting - ASPHostPortal :: New T-SQL Features in SQL Server 2012

clock July 17, 2012 09:17 by author Jervis

Paging Data

Paging is a big issue for developers: it is required by many applications but entails many performance problems, and developers have used different workarounds to support it.

For example, assume we need to display several pages of the HumanResource.Employee object with 5 rows per page. Below is the query you need to execute this in SQL Server 2012.

DECLARE @page_index SMALLINT ,
@page_Size SMALLINT = 5,
@offset INT
SET @page_index = 2
SET @offset = ( (@page_index - 1) * @page_Size)
SELECT   BusinessEntityID,
         NationalIDNumber,
         JobTitle
FROM HumanResources.Employee
ORDER BY BusinessEntityID

OFFSET @offset ROWS
FETCH NEXT @page_Size ROWS ONLY

In this query you will note that there are two additional clauses added to the well-known SELECT query.

OFFSET <offset> ROWS

The number of rows to exclude from the top.

FETCH NEXT <page_size> ROWS ONLY

The number of rows to display.

So in the results you will only receive the data you need.



You must remember that the ORDER BY clause is mandatory with OFFSET/FETCH, and you can have multiple columns in the ORDER BY clause.


SELECT SOH.SalesOrderID,
       SOD.SalesOrderDetailID,
       P.Name AS ProductName
FROM Sales.SalesOrderDetail SOD
INNER JOIN Sales.SalesOrderHeader SOH
    ON SOD.SalesOrderID = SOH.SalesOrderID
INNER JOIN Production.Product P
    ON P.ProductID = SOD.ProductID
ORDER BY SOH.SalesOrderID, SOD.SalesOrderDetailID DESC
OFFSET 100 ROWS
FETCH NEXT 25 ROWS ONLY


The query plan for this will appear as below:




As you can see, the Top operator will be introduced at an almost zero percent cost.


Lag & Lead

This is another T-SQL feature added in SQL Server 2012 that addresses a common issue in reporting.


Let me describe this feature with a real world example.

Below are the sales amounts for one month.



The above data was generated from the FactInternetSales table in the AdventureWorksDWDenali database. You can find the same script in the 5528EN _02_03.sql file in the code folder.


For reports you need this month’s sales and last month’s sales along with the difference between the two months.


If you were asked to write a query for this, your query would look like:


SELECT
 t1.Year,
 t1.Month,
 t1.SalesAmount as SalesAmountThisMonth,
 t2.SalesAmount as SalesAmountLastMonth,
 t1.SalesAmount - t2.SalesAmount SalesDifferent
FROM MonthlySales as t1
LEFT OUTER JOIN MonthlySales as t2
ON (t1.Year = t2.Year) AND (t1.Month = t2.Month+1)
ORDER BY Year, Month


You need to use LEFT OUTER JOIN as INNER JOIN will not give any rows for the very first month in the data set.


Obviously, there will be either an index scan or a table scan (depending on whether you have an index or not), twice for the table, which is costly.

With SQL Server 2012, the new T-SQL function LAG is introduced just to address the above scenario:


SELECT
 Year,
 Month,
 SalesAmount as SalesAmountThisMonth,
 LAG(SalesAmount, 1, NULL) OVER(ORDER BY Year, Month) as SalesAmountLastMonth,
 SalesAmount - LAG(SalesAmount, 1, 0) OVER(ORDER BY Year, Month) SalesDifferent
FROM MonthlySales


As shown in the above example, the LAG analytical function has three parameters. The first parameter is the expression, which is the column in this scenario. The second parameter is the offset; since we are looking for the last month, this parameter is 1. If you want last year's comparative month's data displayed, this parameter should be 12. The third parameter is the default: when there is no match for the previous month, the default value is displayed.

So the output of the above query will be:




So how does this new syntax compare with the pre-2012 syntax? The simple way to compare them is to run both queries in one batch and check the execution plan, as shown in the below image. In the batch, the first query is the old-style query and the second is the one using the LAG function.



Relative cost-wise, the first query (old-style LEFT OUTER JOIN) costs 77%, whereas the query containing LAG costs 23%. These queries were also analyzed for reads and duration using SQL Profiler.

Below are the results for both queries executed after clearing the cache.


                  Reads    Duration (ms)
LEFT OUTER JOIN   125      8
LAG               46       3

Obviously, using LAG has a clear performance advantage. However, to achieve the best performance you need an index covering those columns (in the above scenario, Year and Month).

You can also use the LAG function with grouping. For example, if you want product model sales differentiated by model name, you could use the below query.

SELECT

 ModelName,
 Year,
 Month,
 SalesAmount as SalesAmountThisMonth,
 LAG(SalesAmount, 1, NULL) OVER(PARTITION BY ModelName ORDER BY Year, Month) as SalesAmountLastMonth,
 SalesAmount - LAG(SalesAmount, 1, 0) OVER(PARTITION BY ModelName ORDER BY Year, Month) SalesDifferent
FROM ModelNameWiseMonthlySales


And the result is:




From the above results, you will see that the sales amount for the previous month is taken only within the same model name.


LEAD is another new function, similar to LAG, but it instead looks at future records.

SELECT

 Year,
 Month,
 SalesAmount as SalesAmountThisMonth,
 LAG(SalesAmount, 1, NULL) OVER(ORDER BY Year, Month) as SalesAmountLastMonth,
 LEAD(SalesAmount, 1, NULL) OVER(ORDER BY Year, Month) as SalesAmountNextMonth
FROM MonthlySales


As you can see from the below screenshot, the above query will give you the previous month’s and next month’s sales values in a single query.




FIRST_VALUE & LAST_VALUE


FIRST_VALUE will return the first value, while LAST_VALUE will return the last, in an ordered set of values in SQL Server 2012.

Take, for example, the below query.


SELECT d.LastName,
       d.GroupName,
       e.JobTitle,
       e.HireDate,
       FIRST_VALUE(e.HireDate) OVER (PARTITION BY d.GroupName ORDER BY e.JobTitle) AS FirstValue,
       LAST_VALUE(e.HireDate) OVER (PARTITION BY d.GroupName ORDER BY e.JobTitle) AS LastValue
FROM HumanResources.Employee e
INNER JOIN HumanResources.vEmployeeDepartmentHistory d
    ON e.BusinessEntityID = d.BusinessEntityID


This query will join the HumanResources.Employee table and the HumanResources.vEmployeeDepartmentHistory view together via BusinessEntityID. You can see that LastName, JobTitle, GroupName and Hiredate are selected from either the table or the view.


If you look closely at the LAST_VALUE and FIRST_VALUE functions, you will observe that the partition column (GroupName) and the ORDER BY column (JobTitle) are the same in both.

FIRST_VALUE will give you the first HireDate for each group when ordered by job title.

The results are given below:




CUME_DIST / PERCENT_RANK

CUME_DIST and PERCENT_RANK can be used to calculate the relative rank of a row within a group of rows in SQL Server 2012.

Use PERCENT_RANK to evaluate the relative standing of a value within a query result set or partition.

PERCENT_RANK = (RANK() - 1) / (TotalRows - 1)


CUME_DIST calculates the relative position of a specified value in a group of values.


The below example displays RANK, PERCENT_RANK, and CUME_DIST:

SELECT SalesOrderID,
 RANK() OVER(ORDER BY SalesOrderID) RANK,
 PERCENT_RANK() OVER(ORDER BY SalesOrderID) AS PERCENT_RANK,
 CUME_DIST() OVER(ORDER BY SalesOrderID) AS CUME_DIST
 FROM Sales.SalesOrderDetail
 WHERE SalesOrderID IN (75121,75122,75123 )

The result for the above query is shown below:



IIF

I’m not sure why the IIF function, which is a very simple function, was not included in previous versions of SQL Server. IIF is similar to a CASE expression but is optimized for evaluating two values. For example:

SELECT PersonType,

IIF(PersonType = 'EM','Employee','Other') PersonType

FROM Person.Person


The above query will return 'Employee' for rows where PersonType equals 'EM', and 'Other' otherwise.


CHOOSE

This new simple feature is useful for extracting data elements from a data array:


DECLARE @MonthNumber int = 5
SELECT CHOOSE(@MonthNumber, 'JAN','FEB','MAR','APR','MAY','JUN','JUL','AUG','SEP','OCT','NOV','DEC') AS Month

The above will return the 5th element from the above array which is MAY.

TRY_CONVERT

The CONVERT function has been available since earlier versions of SQL Server.

Let us say, we have a table with dates.

INSERT INTO MyData

(MyDate)

VALUES

('2011-01-01'),

('2012-12-31'),

('2012-02-29'),

('2011-02-29')


Note that 2011-02-29 is an invalid date. You could use the CONVERT function to convert varchar to the date data type:

SELECT CONVERT(date,MyDate) Mydate

FROM MyData

However, this will return an error, since the last row contains an invalid date. In SQL Server 2012, you can use the new feature, TRY_CONVERT.

SELECT TRY_CONVERT(date,MyDate) Mydate

FROM MyData

TRY_CONVERT will return all the valid rows, and the invalid row will return a NULL value. Most importantly, this won’t trigger an exception that fails the query.

Apart from this, you can use the TRY_CONVERT function to retrieve only the error records:

SELECT MyDate

FROM MyData

WHERE TRY_CONVERT(date,Mydate) IS NULL


Instead of throwing an error, TRY_CONVERT returns a NULL value, which you can then more easily handle in your code.


CONCAT

CONCAT is a significant addition for SQL Server developers.


If you have a table with Title, FirstName, MiddleName, and LastName columns, you may need to combine these columns and display them as a single column, as shown below.

SELECT Title,FirstName,MiddleName, LastName ,
Title + ' '+ FirstName+ ' '+MiddleName+ ' '+LastName FullName
FROM Person.Person

Results are shown below:



However, except for three records, all the rows are NULL. That is because when a NULL value is concatenated with a value, the result is NULL.


The solution is to utilize the ISNULL function.


SELECT Title,FirstName,MiddleName, LastName ,

ISNULL(Title,'') + ' '+ FirstName+ ' '+ ISNULL(MiddleName, '' )+ ' '+LastName FullName

FROM Person.Person


However, this might be a tedious task if you have numerous columns to combine. In SQL Server 2012, you have a new function CONCAT which returns a string that is the result of concatenating two or more string values.


SELECT Title,FirstName,MiddleName, LastName ,

CONCAT(Title, ' ',FirstName, ' ',MiddleName, ' ',LastName) FullName

FROM Person.Person

From the below results you can see the null issue is eliminated in the FullName column.




Note that the CONCAT function has no performance impact compared with the old-style syntax.
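Another convenience of CONCAT is that it implicitly converts non-string arguments to strings and treats NULL arguments as empty strings, so no extra CAST or ISNULL calls are needed. A small sketch with made-up values:

```sql
-- CONCAT implicitly converts the number and the date to strings,
-- and the trailing NULL is treated as an empty string
SELECT CONCAT('Order #', 1042, ' placed on ', CAST('2012-05-25' AS date), NULL) AS OrderLabel
```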

PARSE & TRY_PARSE

PARSE is somewhat similar to CONVERT in that it can convert one value to another specified value, however it should only be used for converting from a string to a date/time or number type. Note that since this is a CLR function it may have some performance impact.


SELECT PARSE('11/28/2011' AS datetime)
GO
SELECT PARSE('Monday, 28 November 2011' AS datetime USING 'en-US')
GO

As with TRY_CONVERT, there is a TRY_PARSE function that handles errors gracefully, i.e. it returns NULL instead of throwing an error.
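A quick sketch showing TRY_PARSE with a valid and an invalid value side by side:

```sql
-- TRY_PARSE returns NULL for the unparseable string instead of raising an error
SELECT
  TRY_PARSE('Monday, 28 November 2011' AS datetime USING 'en-US') AS ValidDate,
  TRY_PARSE('Not a date' AS datetime USING 'en-US')               AS InvalidDate
```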

Date Time Functions

Quite a few new date and time functions have been introduced, as shown in the code below. Each takes individual date parts as parameters and builds a value of the corresponding date/time data type.

SELECT
  DATEFROMPARTS(2012, 01, 31)   AS DATE_FROM_PARTS,
  DATETIME2FROMPARTS(2012, 01, 31, 15, 24, 5, 2, 7)  AS DATETIME2_FROM_PARTS,
  DATETIMEFROMPARTS(2012, 01, 31, 15, 30, 5, 997)   AS DATETIME_FROM_PARTS,
  DATETIMEOFFSETFROMPARTS(2012, 01, 31, 15, 30, 5, 1, -8, 0, 7)   AS DATETIMEOFFSET_FROM_PARTS,
  SMALLDATETIMEFROMPARTS(2012, 01, 31, 15, 30)   AS SMALLDATETIME_FROM_PARTS,
  TIMEFROMPARTS(15, 30, 5, 1, 7)    AS TIME_FROM_PARTS

Each of the above functions assembles the supplied date parts and returns a value of its corresponding date/time data type.
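Two behaviors of these functions are worth knowing: if any argument is NULL the result is NULL, and an invalid combination of parts raises an error. A short sketch:

```sql
SELECT DATEFROMPARTS(2012, 2, 29) AS LeapDay     -- valid: 2012 is a leap year
SELECT DATEFROMPARTS(NULL, 1, 31) AS NullResult  -- any NULL argument yields NULL
-- SELECT DATEFROMPARTS(2011, 2, 29)             -- would raise an error: invalid date
```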



EOMonth


In many human resource applications, you are required to get the last date of the month. In previous versions of SQL Server, you might need to use the following syntax.


SELECT

CAST(DATEADD(d,-1,(DATEADD(mm,DATEDIFF(m,0,GETDATE())+1,0)))AS DATE)

In SQL Server 2012 you can simply use EOMONTH to perform this:


SELECT EOMONTH(GETDATE())

There will be no performance gain or loss, since this function and the old-style expression have the same performance impact.
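EOMONTH also accepts an optional second parameter, the number of months to add before computing the month end:

```sql
SELECT
  EOMONTH(GETDATE())     AS ThisMonthEnd,
  EOMONTH(GETDATE(), 1)  AS NextMonthEnd,  -- end of next month
  EOMONTH(GETDATE(), -1) AS LastMonthEnd   -- end of previous month
```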



SQL 2012 Hosting - ASPHostPortal :: Restore Database Enhancements in SQL 2012

clock June 11, 2012 08:46 by author Jervis

In SQL 2012, Microsoft has introduced some nice restore database enhancements.
The major enhancements are:


1. Point-in-time restore has now a visual timeline that allows you to quickly select the target time and perform your restore.

2. Page Restore already worked in SQL 2005 and SQL 2008 (R2), but it now has a nice user interface. It allows you to check your database for corrupt pages and restore them from a good backup file.


In this blog, I’ll give you an overview of how to use these 2 new features.

Point-in-time restore

In the Object Browser of your SSMS, right click on Databases and select “Restore Database”

In this example, I will perform a restore of the Adventureworks2008R2 database. I selected Device to get my backup files. Just press the […] button



In the locate backup file window, I select all the backup files (Full backups and Transaction Logs) that have been made. To create the backups, I just created a simple Maintenance Plan.

Click on OK.



Now all the backup sets are in the list (this is not new…). As you can see, there is a new button called “Timeline”. Click on it to open the timeline interface.




Now, you can choose to restore to the last backup taken, or choose a specific date and time. With the timeline, you can scroll to the restore time that you want, and you can also see which types of backups will be used to perform the restore. Once you have selected the correct time, just press the OK button.




Now press OK again to start your restore. A restore plan is automatically generated and your database is restored up to the requested time.
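The same point-in-time restore can also be scripted in T-SQL using the STOPAT option. A minimal sketch, assuming hypothetical backup file names and target time:

```sql
-- Restore the full backup without recovering the database
RESTORE DATABASE AdventureWorks2008R2
  FROM DISK = N'C:\Backup\AW_Full.bak'
  WITH NORECOVERY;

-- Roll the log forward only up to the requested point in time
RESTORE LOG AdventureWorks2008R2
  FROM DISK = N'C:\Backup\AW_Log.trn'
  WITH STOPAT = '2012-06-11 08:30:00', RECOVERY;
```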






Page Restore

To perform a page restore, you first need a corrupt database, and you also need a GOOD backup file, meaning one taken before the page became corrupt.
As you can see below, I ran DBCC CHECKDB and my database is indeed corrupt.



Let’s fix this database!

Right click on your DB, select Tasks – Restore – Page



In the Restore Page window, the database is selected and the Pages grid is automatically showing the damaged pages. You can also run DBCC CHECKDB, by clicking on the button “Check Database Pages”, to find out if there are more damaged pages in the database. You also need to set the location for the Tail-Log backup file. The Backup sets grid shows you all the backups that can be used to fix your pages.






Just click on the OK button to start the page restore




When I check my database again with DBCC CHECKDB, I see that the damaged page has been fixed.
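For reference, a page restore can also be scripted in T-SQL; the file names and the page ID in this sketch are hypothetical:

```sql
-- Restore only the damaged page from a good full backup
RESTORE DATABASE AdventureWorks2008R2
  PAGE = '1:57'
  FROM DISK = N'C:\Backup\AW_Full.bak'
  WITH NORECOVERY;

-- Back up the tail of the log, then restore it to bring the page current
BACKUP LOG AdventureWorks2008R2
  TO DISK = N'C:\Backup\AW_Tail.trn';

RESTORE LOG AdventureWorks2008R2
  FROM DISK = N'C:\Backup\AW_Tail.trn'
  WITH RECOVERY;
```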




I think these 2 new features will make the life of the DBA just a little bit easier.


Reasons why you must trust ASPHostPortal.com

Every provider will tell you how good their support, uptime, expertise, and guarantees are. Take a close look. What they’re really offering you is nothing close to what ASPHostPortal does. You will be treated with respect and provided the courtesy and service you would expect from a world-class web hosting business.

You’ll have highly trained, skilled professional technical support people ready, willing, and wanting to help you 24 hours a day. Your web hosting account servers are monitored from three monitoring points, with two alert points, every minute, 24 hours a day, 7 days a week, 365 days a year. The following is a list of the other added benefits you will find when hosting with us:

- DELL Hardware
Dell hardware is engineered to keep critical enterprise applications running around the clock with clustered solutions fully tested and certified by Dell and other leading operating system and application providers.
- Recovery Systems
Recovery becomes easy and seamless with our fully managed backup services. We monitor your server to ensure your data is properly backed up and recoverable so when the time comes, you can easily repair or recover your data.

- Control Panel
We provide one of the most comprehensive customer control panels available. Providing maximum control and ease of use, our Control Panel serves as the central management point for your ASPHostPortal account. You’ll use a flexible, powerful hosting control panel that will give you direct control over your web hosting account. Our control panel and systems configuration is fully automated and this means your settings are configured automatically and instantly.

- Excellent Expertise in Technology
The reason we can provide you with a great amount of power, flexibility, and simplicity at such a discounted price is due to incredible efficiencies within our business. We have not just been providing hosting for many clients for years, we have also been researching, developing, and innovating every aspect of our operations, systems, procedures, strategy, management, and teams. Our operations are based on a continual improvement program where we review thousands of systems, operational and management metrics in real-time, to fine-tune every aspect of our operation and activities. We continually train and retrain all people in our teams. We provide all people in our teams with the time, space, and inspiration to research, understand, and explore the Internet in search of greater knowledge. We do this while providing you with the best hosting services for the lowest possible price.

- Data Center
ASPHostPortal modular Tier-3 data center was specifically designed to be a world-class web hosting facility totally dedicated to uncompromised performance and security.
- Monitoring Services
From the moment your server is connected to our network it is monitored for connectivity, disk, memory and CPU utilization – as well as hardware failures. Our engineers are alerted to potential issues before they become critical.

- Network
ASPHostPortal has architected its network like no other hosting company. Every facet of our network infrastructure scales to gigabit speeds with no single point of failure.

- Security
Network security and the security of your server are ASPHostPortal’s top priorities. Our security team is constantly monitoring the entire network for unusual or suspicious behavior so that when it is detected we can address the issue before our network or your server is affected.

- Support Services
Engineers staff our data center 24 hours a day, 7 days a week, 365 days a year to manage the network infrastructure and oversee top-of-the-line servers that host our clients’ critical sites and services.

 



SQL 2012 Hosting - ASPHostPortal :: Sequence in SQL Server 2012

clock May 25, 2012 07:55 by author Jervis

SQL Server 2012 (code-named Denali) has now arrived as a CTP. In this article I will look at a core new feature of SQL Server 2012: SEQUENCE. If you are familiar with Oracle, you will already know all about this feature, since it has been standard in Oracle for more than 10 years.

What is Sequence in SQL Server ?


In simple terms, it is a new database object and a substitute for the identity column property.


Using the identity attribute for a column, you can easily generate auto-incrementing numbers (which are often used as primary keys). With Sequence, the number generator is a separate object which you can attach to a table column while inserting. Unlike identity, the next value can be served from a cache in memory rather than from disk, which can make Sequence faster than Identity. We will see this in the coming examples.


Creating a Sequence in SQL Server


To use Sequence, first open SQL Server Management Studio (SSMS) and expand the Object Explorer; under Programmability you will see the Sequences node.




If you right click the sequence and select new, you will be taken to the below screen which has all the attributes for the sequence.




Since Sequence is a database object, it needs to be assigned to a schema. It has a data type, which can be int, bigint, tinyint, smallint, numeric or decimal. The start value and increment are similar to the values you will be familiar with from Identity.


The minimum and maximum values are the boundaries for the sequence. When the cycle option is set, sequence numbers are re-used once a boundary is reached.


Similarly, Sequences can be created using T-SQL as follows.


IF EXISTS (SELECT * FROM sys.sequences WHERE name = N'EmployeeSeq')
DROP SEQUENCE EmployeeSeq;
GO


CREATE SEQUENCE EmployeeSeq AS tinyint
START WITH 0
INCREMENT BY 5;
GO


Now let us see how we can integrate this with an Insert statement.


First we will create a table to incorporate the sequence we created.


CREATE TABLE Employee
(ID tinyint, Name varchar(150) )


Then we will insert:


INSERT INTO Employee (ID, Name)
VALUES (NEXT VALUE FOR EmployeeSeq, 'Dinesh')

INSERT INTO Employee (ID, Name)
VALUES (NEXT VALUE FOR EmployeeSeq, 'Asanka')

Note that you are now using the EmployeeSeq sequence object for the insert.
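You can also request a value directly, inspect the sequence in the catalog views, or reset it. A short sketch:

```sql
-- Get the next value without performing an INSERT
SELECT NEXT VALUE FOR EmployeeSeq AS NextID;

-- Inspect the current state of the sequence
SELECT name, current_value
FROM sys.sequences
WHERE name = 'EmployeeSeq';

-- Restart the sequence from its initial value
ALTER SEQUENCE EmployeeSeq RESTART WITH 0;
```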




About ASPHostPortal.com

We’re a company that works differently to most. Value is what we output and help our customers achieve, not how much money we put in the bank. It’s not because we are altruistic. It’s based on an even simpler principle. "Do good things, and good things will come to you".

Success for us is something that is continually experienced, not something that is reached. For us it is all about the experience – more than the journey. Life is a continual experience. We see the Internet as being an incredible amplifier to the experience of life for all of us. It can help humanity come together to explode in knowledge exploration and discussion. It is a continual enlightenment of new ideas, experiences, and passions.




Corporate Address (Location)

ASPHostPortal
170 W 56th Street, Suite 121
New York, NY 10019
United States
