PowerBI and On-Premise Refresh

In my role as a BI Analyst I have tinkered with many different mainstream business analytics and visualization platforms, but so far my favorite is Microsoft’s PowerBI. Even though the platform is constantly in development (Microsoft releases monthly updates to both PowerBI Desktop and the web application), it feels like Christmas every single time. The functionality Microsoft keeps building in is very impressive: you can create your own visualizations with R, create and manage report parameters, manage and edit queries and see your results in real time, and more. One of the coolest parts, to me, is that you can create and manage all report data sources and datasets in one central location through what Microsoft calls the “Data Gateway”.


Data Gateway

This gateway holds the connection string information for your SQL Server instances and data sources (much like shared data sources in SSRS). Microsoft also provides a downloadable program that establishes your own personal gateway, or gateway manager, on your local machine. It makes sense to install this directly on the server housing your instance: the personal gateway requires a constant connection for scheduled refresh, and if the machine running it goes down the refresh will fail. How easy is it to manage your sources using the application? I’ll show you.

First, navigate to app.powerbi.com and log in with your Microsoft account. Once you’re there, open the “Manage Gateways” page through the gear icon at the top of the initial workspace. Select “Add Data Source” and fill out the information needed to connect to an instance of SQL Server, along with the administrators for this connection. The department and description fields are optional.

Gateway1

Next, create your connection to your data source(s). Under “Data Source Settings”, enter your credentials and the data source type, then test the connection to make sure PowerBI can reach your data. You can even set privacy and authentication modes!

Gateway2

Now that your gateway is established, go ahead and add whatever data you wish to use for a report. Publish it to PowerBI from PowerBI Desktop or import the file directly into PowerBI and select it. Right-click the dataset and select “Schedule Refresh” to create your refresh schedule and assign a data gateway to your new dataset.

Gateway3

From the datasets view, select a dataset, assign it a gateway, and choose how often (if ever) you want the data to refresh and stay up to date.

Gateway4

It’s that simple! Now you are ready to show your reports and not have to worry about old, stale data.


Side Notes

PowerBI scheduled refresh and on-premise refresh are only available with the paid Pro tier of PowerBI. If you do not have it, you should get it; the increase from 1GB to 10GB per workspace is worth it alone.

PowerBI does not handle pbix files (PowerBI report files) over 1GB, so make sure you aren’t importing more than 1GB of data into your report. Keep your datasets as simple as possible to improve performance and save space.


SQL Server System Databases

A large majority of SQL developers and BI analysts tend to access instances of SQL Server and go directly to the data to pull what we need. We log in, select our user database(s), write queries, and generate datasets from them. But what about those four system databases we see in Object Explorer all the time? They just sit there, and we don’t entirely know what they do or what we can do with them. Well, I’m going to spell it out for you.



master

Some of us have actually used this database before and understand the gist of what it does and the information it holds. To clarify, master is a system database that contains all of the system-level information for that instance of SQL Server: logins, linked servers, endpoints, database information and objects, and other configuration settings are all stored here. This database must be present, otherwise SQL Server will not start.

Pro Tip: Always take regular backups of master; otherwise any corruption of the database will result in a giant headache, and all of your changes will be lost in space somewhere.

When maintaining and backing up this database, remember the rule of thumb that you should never create user objects in master. Otherwise it will need to be backed up much more often.
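
As a minimal hedged sketch of that routine backup (the disk path and file name below are placeholders, not from the original post):

-- Take a full backup of master; path and file name are illustrative.
BACKUP DATABASE master
TO DISK = N'D:\SQLBackups\master_full.bak'
WITH INIT, CHECKSUM, STATS = 10;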


model

This database does exactly what the name describes: it is the model for any new database. The model database contains properties, users, stored procedures, tables, views, etc. and can be changed at any time without disturbing existing databases. Using it you can define your ideal template for every new database and pre-load it with users and stored procedures, making life ultimately easier.

That is the only purpose this database serves. It does nothing else.
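
For instance (a hedged sketch; the table name is purely illustrative), any object you create in model automatically shows up in databases created afterwards:

-- Add a table to model; every database created from this point forward will contain it.
USE model;
GO
CREATE TABLE dbo.AuditLog
(
    AuditId   INT IDENTITY(1, 1) NOT NULL,
    EventTime DATETIME NOT NULL DEFAULT (GETDATE()),
    EventText VARCHAR(500) NULL
);
GO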


msdb

If you are familiar with any DBA responsibilities or functions then you will be at least a little familiar with msdb, because it is used by SQL Server Agent, Database Mail, Service Broker, log shipping, and other services. It also holds the backup history for the instance.
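
As a quick hedged example of what lives in there, the backup history mentioned above can be pulled straight out of msdb:

-- List the most recent backups recorded in msdb's backup history tables.
SELECT TOP (20)
       bs.database_name,
       bs.type,                 -- D = full, I = differential, L = log
       bs.backup_start_date,
       bs.backup_finish_date,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
INNER JOIN msdb.dbo.backupmediafamily AS bmf
        ON bmf.media_set_id = bs.media_set_id
ORDER BY bs.backup_finish_date DESC;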


tempdb

Out of all of the databases on an instance, tempdb is probably the busiest. It is the background workspace that SQL Server uses for ERRYTHANG! That includes query processing and sorting, user-created temporary tables, table variables, internal work tables, and the version store used for snapshot isolation, among other things.

SQL Server drops and re-creates tempdb every time the SQL Server service starts, so once the server restarts, everything you once had in it is gone.

Keeping tempdb properly sized is vital to the performance of your SQL Server instance. By default, tempdb starts at 8MB in size and grows in 10% increments until the drive is full, which means a query returning a large dataset can force the server to spend resources growing the files just to accommodate it.
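
A hedged sketch of pre-sizing it so that growth doesn't happen mid-query (the sizes below are placeholders; tempdev and templog are the default logical file names, but check sys.master_files for the names on your instance):

-- Pre-size the tempdb data and log files so they don't have to autogrow under load.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 1024MB, FILEGROWTH = 256MB);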


Resource

There is actually one more system database, although you cannot see it. This is a hidden, read-only system database where system objects are stored. You will not be able to see it by normal means; however, you can find its data files by navigating to C:\Program Files\Microsoft SQL Server\MSSQL<version>.MSSQLSERVER\MSSQL\Binn.

Typically, the only way to access the contents of this database is by using the OBJECT_DEFINITION system function.
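
For example (a hedged sketch against a well-known system procedure), you can read the definition of a system object that physically lives in the Resource database:

-- Return the T-SQL definition of a system stored procedure stored in the Resource database.
SELECT OBJECT_DEFINITION(OBJECT_ID(N'sys.sp_tables')) AS ObjectDefinition;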


JSON Compatibility

Well everyone, SQL Server 2016 is now available to the public, and with it comes functionality that everyone has been asking and waiting for: JSON support! With today’s world using more and more JSON for its data interchange instead of just XML, the requests for native JSON support in DBMSs have been increasing. SQL Server answered those developers and now ships with built-in functions for producing, parsing, and querying JSON. Before we jump into the new functionality, let’s review what JSON is and why it’s good.

What is JSON?

JavaScript Object Notation (JSON) is a way to store information in an easy-to-read, easy-to-access manner. It gives us a human-readable format for data that can be accessed in a highly logical way. With more and more websites using AJAX, it’s becoming much more important for pages to load data quickly and asynchronously. I say asynchronously because the request for data runs in the background, so it doesn’t block the rest of the page.

Many of these AJAX-powered websites expose feeds (RSS, for example) that are very easy to import and use on the server side, but when loading them with AJAX you run into the same-origin restriction: you can only request a feed from the same domain the page is hosted on. JSON gets around this with a technique called “JSONP”, which uses a callback function to push the JSON data back to whichever domain you’re on, letting you store the JSON data and call it whenever you need it.

For a quick reference of what JSON actually looks like, see the formatted output in the example below.

Example

First let’s create a table that will store some test data about a few of our favorite super heroes.

CREATE TABLE [dbo].[People]
(
[Id] [INT] NULL ,
[FirstName] [VARCHAR](50) NULL ,
[LastName] [VARCHAR](50) NULL ,
[Address] [VARCHAR](50) NULL ,
[City] [VARCHAR](50) NULL ,
[State] [VARCHAR](50) NULL ,
[Zip] [VARCHAR](50) NULL 
)
ON [PRIMARY];
GO

Let’s add in our super heroes.

INSERT INTO dbo.People
        ( Id, FirstName, LastName, Address, City, State, Zip )
VALUES  ( 1, 'Captain', 'America', '1234 Freedom Street', 'Ann Arbor', 'MI', '48108' ),
        ( 2, 'Iron', 'Man', '5678 Genius Ave.', 'Chelsea', 'MI', '48118' ),
        ( 3, 'Black', 'Widow', '910 Russian Lane', 'Detroit', 'MI', '48108' );

Now that we have a table filled with data, let’s pull all of it out and convert it to JSON. For this example we’ll simply grab everything from the table and return it using FOR JSON AUTO.

SELECT *
FROM dbo.People
FOR JSON AUTO

You will notice the “FOR JSON AUTO” clause in the query above, and I know exactly what you’re thinking: it looks just like how you tell SQL Server to return XML. Of course Microsoft would fit JSON in the same way; it makes total sense. FOR JSON AUTO works the same way as FOR XML AUTO and simply returns the JSON representation of the data you request. The raw result returned from SQL Server is below, as well as a formatted version of the JSON.

SQL:

JSON1

Formatted JSON:

[
    {
        "Id": 1,
        "FirstName": "Captain",
        "LastName": "America",
        "Address": "1234 Freedom Street",
        "City": "Ann Arbor",
        "State": "MI",
        "Zip": "48108"
    },
    {
        "Id": 2,
        "FirstName": "Iron",
        "LastName": "Man",
        "Address": "5678 Genius Ave.",
        "City": "Chelsea",
        "State": "MI",
        "Zip": "48118"
    },
    {
        "Id": 3,
        "FirstName": "Black",
        "LastName": "Widow",
        "Address": "910 Russian Lane",
        "City": "Detroit",
        "State": "MI",
        "Zip": "48108"
    }
]

Keep in mind that this is JSON support, not a native JSON type: in SQL Server 2016 the JSON data is represented as NVARCHAR. But what happens if we want to reverse the process and turn a JSON object back into a relational table? Microsoft built in that conversion as well. Let’s use the previous result as the base for the transformation from JSON object to relational table.

DECLARE @JSON NVARCHAR(MAX) =
(
    SELECT Id, FirstName, LastName, Address, City, State, Zip
    FROM dbo.People
    FOR JSON AUTO
);

SELECT Array.Id,
       Array.FirstName,
       Array.LastName,
       Array.Address,
       Array.City,
       Array.State,
       Array.Zip
FROM OPENJSON(@JSON, '$')
WITH (  Id        INT,
        FirstName VARCHAR(50),
        LastName  VARCHAR(50),
        Address   VARCHAR(50),
        City      VARCHAR(50),
        State     VARCHAR(50),
        Zip       VARCHAR(50)
     ) AS Array;

Much like XML paths, you can reference specific paths within JSON objects. The dollar sign ($) represents the input JSON object, and properties within the JSON text are referenced with a JavaScript-like syntax. A few examples include:

  • '$' – references the entire JSON object
  • '$.property1' – references property1 in the JSON object
  • '$[5]' – references the element at index 5 of the JSON array (array indexes are zero-based, so this is the sixth element)

Other JSON Functions

  • ISJSON() – checks whether the NVARCHAR input is properly formatted JSON
  • JSON_VALUE() – extracts a scalar value from the JSON text at the specified path (a short example of both follows)
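
Here is a small hedged sketch of both functions run against the JSON produced from our dbo.People table (the paths are just illustrations):

-- Validate the JSON text, then pull individual scalar values out of it by path.
DECLARE @JSON NVARCHAR(MAX) = (SELECT * FROM dbo.People FOR JSON AUTO);

SELECT ISJSON(@JSON)                        AS IsValidJson,   -- 1 when well-formed
       JSON_VALUE(@JSON, '$[0].FirstName')  AS FirstHeroName, -- 'Captain'
       JSON_VALUE(@JSON, '$[2].City')       AS ThirdHeroCity; -- 'Detroit'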

Formatting dbo.sp_send_dbmail

Recently I received an integrations request to create a globally standardized inventory system for all clients and make projections based on current inventory counts and frequency of sales. During this process a data steward requested the information that returns as NULL (data that never had a home in the standardized table) so they can see whether any client-side inventory items should be added to the standardized inventory list.

To do this I pulled the data from a staging table where the standardized value returned NULL and used that query to formulate the data I wanted to send to the steward. The format requested was CSV, so I decided to send the mid-integration data using msdb.dbo.sp_send_dbmail. Since dbmail has so many different parameters I won’t get into all of them, but you can find documentation on each parameter on MSDN. My sample code is below:

DBMAIL1

You will notice lots of parameters, most of them obvious for sending mail: the profile name, recipients, copy recipients, email subject, and email body. But there is even more functionality you can use, so let’s take a look; a sample call putting these together follows the list.

  • @query – a query run by dbmail and included in the email sent
  • @attach_query_result_as_file – specifies whether the results of the query are attached to the email as a file
  • @query_attachment_filename – specifies the file name for the attachment
  • @query_result_header – specifies whether the query results include column headers
  • @query_result_width – used for formatting the width of the query results
  • @query_result_separator – the character used to separate the columns in the results
  • @query_result_no_padding – specifies whether the results are padded
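
Since the original screenshot (DBMAIL1) isn’t reproduced here, below is a hedged sketch of how these parameters fit together; the profile name, recipients, and query are placeholders rather than the values from the actual integration:

-- Minimal sketch: e-mail query results as a CSV attachment (all values are placeholders).
EXEC msdb.dbo.sp_send_dbmail
    @profile_name                = N'DefaultProfile',
    @recipients                  = N'data.steward@example.com',
    @subject                     = N'Unmapped inventory items',
    @body                        = N'Attached are the client items that did not map to the standardized list.',
    @query                       = N'SELECT ClientItemId, ClientItemName
                                      FROM StagingDB.dbo.InventoryStaging
                                      WHERE StandardItemId IS NULL;',
    @attach_query_result_as_file = 1,
    @query_attachment_filename   = N'UnmappedInventory.csv',
    @query_result_header         = 1,
    @query_result_separator      = N',',
    @query_result_no_padding     = 1;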

This alone is very useful and can get you far, but there’s one small thing that will mess everything up if you leave it as-is, and it comes down to the format dbmail uses to export query results. When you run a query you usually return the results in “grid” format, but dbmail uses the text results for its export. Why is that a problem? Let’s take a look.

When running a simple query and returning in grid mode you will receive the following results.

DBMAIL3

This is normally how we look at data in SQL Server; grid mode is the result format most similar to an Excel spreadsheet. From this format we would assume dbmail would automatically map the data into each respective cell upon export, but that is definitely not the case. Since dbmail uses the text results, let’s run the query in text mode and see what causes the problem.

DBMAIL4

If dbmail exported the text exactly as it is now, you would receive all of the data shown but also a dashed line taking up all of row #2 of the Excel spreadsheet. It also throws off how the CSV renders locally in Excel: even though we used the @query_result_separator parameter, Excel has a hard time figuring out that there are multiple columns and bunches all of the exported data into column “A”. This is obviously not what we want. To fix it, you can use the small hack below to force Excel to recognize the separation.

DBMAIL2

By including “sep=,” as the very first line of the file, you force Excel to treat the file as comma delimited, and it will open properly. We will also set @query_result_width to 32767 (the maximum) mainly to avoid any word wrapping. Now when my data is sent off to the data steward during the integration process, he will receive a CSV document that is much easier to manipulate and digest than the smashed-together mess we produced earlier.
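
The DBMAIL2 screenshot isn’t reproduced here either, so the following is only a hedged guess at the shape of the adjusted call: I’m assuming the “sep=,” line is emitted as an extra single-column result set at the top of @query (one common way to get it into the attachment), and @query_result_width is raised to its maximum.

-- Same call as before, with the 'sep=,' line prepended to the query output
-- and @query_result_width raised to prevent word wrap (values remain placeholders).
EXEC msdb.dbo.sp_send_dbmail
    @profile_name                = N'DefaultProfile',
    @recipients                  = N'data.steward@example.com',
    @subject                     = N'Unmapped inventory items',
    @query                       = N'SET NOCOUNT ON;
                                      SELECT ''sep=,'';
                                      SELECT ClientItemId, ClientItemName
                                      FROM StagingDB.dbo.InventoryStaging
                                      WHERE StandardItemId IS NULL;',
    @attach_query_result_as_file = 1,
    @query_attachment_filename   = N'UnmappedInventory.csv',
    @query_result_header         = 1,
    @query_result_separator      = N',',
    @query_result_width          = 32767,
    @query_result_no_padding     = 1;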


Using OPENQUERY()

When compiling data from many different sources you may need to reach through to linked servers to add to your datasets. This type of task can take a heavy toll on query execution times and overall query performance: instead of the query optimizer staying within a single instance of SQL Server, it now has to work out the best route to reach across to another instance to grab your data. For very large datasets, or datasets with heavy aggregation, this can pose a problem and take minutes to run.

Luckily for us, Microsoft includes a rowset function, OPENQUERY, that executes a specified pass-through query on a specified linked server. In other words, you tell SQL Server to run the query you provide as if it were running natively on the remote instance and return the results to the local instance. Let’s look at an example:

I am logged into an instance of SQL Server that I am pulling data out of; let’s call it SQL1. From this instance I am tasked with pulling data that lives locally, but I also need to grab information that exists on another instance/server altogether. The two instances are set up as linked servers, giving them the ability to communicate with one another. You have two ways to go about this: 1) write your query to reach from your current instance to the remote one using four-part names, or 2) grab the dataset you wish to filter using OPENQUERY and then relate your data together (both patterns are sketched below). Which one is faster and which works better overall? Let’s find out.
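
Since the screenshots (OPENQUERY1 and OPENQUERY3) aren’t reproduced here, the two patterns look roughly like this; the linked server, database, and table names are placeholders, not the actual objects from the post:

-- Pattern 1: distributed query using four-part names; the optimizer pulls remote rows
-- across the linked server and joins/filters them locally (the "remote query" plan).
SELECT  l.OrderId, l.OrderDate, r.CustomerName
FROM    dbo.LocalOrders AS l
INNER JOIN [SQL2].[SalesDB].[dbo].[Customers] AS r
        ON r.CustomerId = l.CustomerId
WHERE   l.OrderDate >= '2016-01-01';

-- Pattern 2: OPENQUERY sends a pass-through query to the linked server, which runs it
-- remotely and only ships the result set back (the "remote scan" plan).
SELECT  l.OrderId, l.OrderDate, r.CustomerName
FROM    dbo.LocalOrders AS l
INNER JOIN OPENQUERY([SQL2],
            'SELECT CustomerId, CustomerName
             FROM SalesDB.dbo.Customers') AS r
        ON r.CustomerId = l.CustomerId
WHERE   l.OrderDate >= '2016-01-01';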

I run the below query, which ultimately takes 9 seconds to run and returns 5401 rows after chewing through millions of rows to filter down to a certain date.

OPENQUERY1

When looking at the execution plan for this distributed query you receive the following:

OPENQUERY2

Using remote queries is expensive, period. The execution plan above shows that almost all of the cost is allocated to a “remote query” operator, which means SQL Server is creating a connection through the linked server at run time and pulling the information across little by little. The OPENQUERY results are a little different.

OPENQUERY3

OPENQUERY query execution plan:

 

OPENQUERY4

The first thing I noticed when using OPENQUERY was how fast it ran: instead of taking 9 seconds like the previous query, it took less than one, and it generated a different execution plan. You’ll now see the query uses a “remote scan”: the pass-through query is executed on the remote instance over the linked server connection, and the local instance simply scans the rows that come back.


Utilizing Client Statistics

When it comes to SQL Server I will always be curious: curious about what these methods and functions are, how they work, why they work the way they do, and how I can better use them. SQL Server has so many features that it almost seems impossible to get through them all. So yesterday I went in search of a new button to click and found “Include Client Statistics”. Having no idea what client statistics were, I clicked it to see what it would do, and boy was I amazed! This little tool is absolute magic!

Essentially, the Client Statistics tool displays grouped information about the query you are running. Although it is not as robust as the Actual Execution Plan, which shows where your query can be improved the most, Client Statistics gives you information you would otherwise need to memorize or keep track of on a sticky note as you tune your query. Let me show you what I mean:

Activate Client Statistics by using either the button in the SQL Editor Toolbar or by right-clicking and selecting “Include Client Statistics” from the drop-down menu.

Toolbar:

CS1

Menu:

CS2

From here, write your query and execute it. You will then receive an output tab similar to the following.

CS3

There is some extremely valuable information that’s given here such as the number of SELECT, INSERT, UPDATE, and DELETE statements. You can see how many transactions were initiated, the number of rows returned, and the processing/execution time. The Network Statistics inform you as to how much traffic is moving from your client to the server and back. The Time Statistics will tell you how much time was spent processing from the client and how long the request spent waiting for the server (all in milliseconds).

This tool can be very useful after just one run but, just as things usually get better with time, using multiple runs during query tuning is even better. Statistics are kept for up to 10 executions, which SQL Server labels “Trial #”, and averaged together at the end. As you make changes to your query you can see the performance changes at a glance; they even come with colored arrows (green for an improvement and red for a regression)! This is shown below.

CS4

There’s a ton to keep an eye out for in SQL Server. Just as my friend and VP of IT says, “It’s just one giant toolbox holding the tools you need to get the job done and you can’t do a thing until you know a thing.” So keep an eye out for new things and don’t hesitate to learn them as you go. It’ll only bring you one step closer to being a SQL god.

 

 


Stored Procedures to Determine Index Creation

I really want to cover this topic because of a recent event in which Reporting Services would not render a report I created due to timeouts, caused mainly by performance issues from the lack of any non-clustered (or other) indexes. Not many BI and data analysts truly understand when and why an index, or several, needs to be configured and set up properly; most of the time the DBA ideally has these in place for us, so we don’t pay as much attention. In this post I will show a couple of different stored procedures you can use to determine which indexes need to be created. I will write another post over the next couple of days to dig into the nitty-gritty details of how indexes work, how they impact performance, and why they need to be created.


 

Method #1: Using Dynamic Management Views (DMVs)

This method was shown to me by our VP of IT, and it proved to be a very useful tool for identifying basic indexing needs. The stored procedure produces a weighted metric (Index Advantage) that shows which tables are being hit the most by the most costly queries. The more costly the query, the more the server gets bogged down under load. So how do we fix this? By creating an index. When designing your databases and tables, remember that you want the system to work as little as possible and let the engine handle most of the load, so that when you need to pull data back out of your database you can do so with ease.

CREATE PROCEDURE spIndexAnalysis (@Database VARCHAR(500))
AS

SELECT ( user_seeks * avg_total_user_cost * ( avg_user_impact * 0.01 ) ) AS [Index Advantage] ,
       'CREATE NONCLUSTERED INDEX IX_' + OBJECT_NAME(c.object_id) + '_'
       + LEFT(CAST(NEWID() AS VARCHAR(500)), 5) + CHAR(10) + ' ON ['
       + OBJECT_NAME(c.object_id) + ']('
       + CASE WHEN c.equality_columns IS NOT NULL
                   AND c.inequality_columns IS NOT NULL
              THEN c.equality_columns + ',' + c.inequality_columns
              WHEN c.equality_columns IS NOT NULL
                   AND c.inequality_columns IS NULL
              THEN c.equality_columns
              WHEN c.inequality_columns IS NOT NULL
              THEN c.inequality_columns
         END + ')' + CHAR(10)
       + CASE WHEN c.included_columns IS NOT NULL
              THEN 'INCLUDE (' + c.included_columns + ') WITH ( FILLFACTOR = 90 )'
              ELSE ''
         END AS [T-SQL]
FROM sys.dm_db_missing_index_group_stats a
INNER JOIN sys.dm_db_missing_index_groups b ON a.group_handle = b.index_group_handle
INNER JOIN sys.dm_db_missing_index_details c ON c.index_handle = b.index_handle
WHERE DB_NAME(database_id) = @Database
      AND equality_columns IS NOT NULL
ORDER BY [Index Advantage] DESC;
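
A quick usage note (the database name below is a placeholder): run the procedure against whichever database you want to analyze, and it returns the weighted advantage alongside a ready-to-review CREATE INDEX statement.

-- Analyze missing-index DMV data for a single database; name is illustrative.
EXEC spIndexAnalysis @Database = 'AdventureWorks2014';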

Always remember that you shouldn’t duplicate an index. If an existing index already references a specific column combination, make sure you DON’T reference the same columns in another index; duplicated indexes write the same data to disk multiple times, filling up space much more quickly.
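
Here is a small hedged sketch (standard catalog views; the table name is whatever you are reviewing) for listing the columns already covered by existing indexes before you create a new one:

-- List existing indexes and their key/included columns for a given table.
SELECT  i.name              AS IndexName,
        c.name              AS ColumnName,
        ic.key_ordinal,
        ic.is_included_column
FROM sys.indexes AS i
INNER JOIN sys.index_columns AS ic
        ON ic.object_id = i.object_id AND ic.index_id = i.index_id
INNER JOIN sys.columns AS c
        ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.People')   -- table name is illustrative
ORDER BY i.name, ic.key_ordinal;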


 

Method #2: Using sp_BlitzIndex

Many of you may know about the free stored procedure available online called sp_BlitzIndex. I downloaded this tool and have been playing with it for a couple of days, and it is VERY useful. You can download it at the following link:

https://www.brentozar.com/blitzindex/

This stored procedure loops through your tables at various levels and reports the status of each one: whether anything is wrong with the indexes that currently exist, or whether one needs to be created. It’s a complicated stored procedure, but that’s because it’s packed with a TON of features.


SSRS 2016 – It’s Coming!!!

It’s big, it’s bad, and it’s going to blow all other versions of SSRS away! There are a lot of cool new features coming in Microsoft’s SQL Server Reporting Services 2016: a complete overhaul of the user interface as well as a bunch of other improvements throughout. If you’ve checked out CTP 3.3 of SSRS 2016 you will notice some of the graphical changes right away, one of the biggest being the Datazen-like UI. Now we know what Microsoft decided to do with the Datazen acquisition! Let’s dig deeper.


SSRS Web Portal (Native Mode)

All users of previous versions of SSRS will know what the web portal is. After installing SSRS along with SQL Server, you can access a web-based experience that lets you view reports and navigate through the elements stored within that report server instance. You can use the web portal to do the following tasks:

  • View, print, search, and subscribe to reports, as well as create subscriptions for other users
  • Create and maintain folder hierarchies
  • Create security for users and reports
  • Create report history and configure parameters and properties
  • Create and store data sources and connection strings
  • Create data-driven subscriptions

Even though these are all existing functions, SSRS 2016 has improved them and added new ones as well. You can now:

  • Create KPIs
  • Download the common tools used to create reports
  • Integrate with PowerBI
  • Integrate with R
  • Pin report items to a PowerBI dashboard
  • Use the new custom parameters pane
  • Export to PowerPoint format
  • Render in HTML5
  • Use new chart types
  • Use improved subscription features
    • Enable/disable subscriptions
    • Create descriptions
    • Change the owner
  • Create mobile reports using the Mobile Report Publisher…

WHAT?! Mobile reports?! Yup! SSRS is unveiling a new Mobile Report Publisher to create Datazen-like reports for your mobile devices and for the new portal. This alone is worth a look at SSRS 2016 for many companies, and the best part is that SSRS comes included with your SQL Server license at no extra cost! Pictures are below.

Web Portal:

ssRSPortal

Mobile Report Publisher:

SS_MRP_LayoutTabSmall

New User Interface:

ssrs_rbfacelift_new

ssrs_rbfacelift_old

 


SQL Collation

Today I want to talk about an important and not fully understood part of SQL Server: collation. Collation affects everything from ordering to query performance.


 

What is it?

Collation is a set of rules that determines how all character data is sorted and compared. Character data is sorted according to the sequence defined by the collation’s rules, which include options such as case sensitivity, accent sensitivity, Japanese kana sensitivity, and character width sensitivity.

More simply put, collation can be thought of as a sort order. In English, collation can be fairly simple, as the ordering is done via ASCII codes. Other languages such as Spanish become more difficult, since accented characters and character groupings (for example, “á” or “ll”) enter the rules and may need to be sorted as if they were a single letter.


 

How does it work?

Since collation is a set of rules for how strings of character data are sorted and compared, it’s safe to say we rely on it mostly at run time when the ORDER BY clause is used. For example, in the results of an ORDER BY, an English speaker (using SQL_Latin1_General_CP1_CI_AS) would see “Chiapas” before “Colima” in ascending order. A speaker of traditional Spanish (using Traditional_Spanish_CI_AS), on the other hand, would expect words beginning with “Ch” to appear after the other words beginning with “C”, resulting in “Colima” sorting before “Chiapas”.
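
As a minimal hedged sketch of that difference (the table variable is just for illustration), you can see both orderings side by side by overriding the collation in the ORDER BY:

-- Sort the same two values under an English collation and a traditional Spanish collation.
DECLARE @MexicanStates TABLE (StateName NVARCHAR(50));
INSERT INTO @MexicanStates VALUES (N'Chiapas'), (N'Colima');

SELECT StateName
FROM @MexicanStates
ORDER BY StateName COLLATE SQL_Latin1_General_CP1_CI_AS;   -- Chiapas, Colima

SELECT StateName
FROM @MexicanStates
ORDER BY StateName COLLATE Traditional_Spanish_CI_AS;      -- Colima, Chiapas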

Collation works a little differently between Unicode strings and non-Unicode strings. When a collation is specified for non-Unicode character data (CHAR, VARCHAR, & TEXT) there is a particular code page that is associated with the collation, which can also be shared between multiple collations.

Unicode character data (NCHAR, NVARCHAR, & NTEXT) do not have associated code pages because Unicode data handles most universal characters.


 

What does each part of the collation description mean?

There are multitudes of collations in SQL Server, and picking the one that’s right for your business can be tricky. For most American companies SQL_Latin1_General_CP1_CI_AS is the default. Luckily for us, Microsoft provides a system function, sys.fn_helpcollations(), to display the collations and their descriptions.

SELECT name, description
FROM sys.fn_helpcollations()

Collation

You can see that each collation has a general description, a case sensitivity setting, an accent sensitivity setting, a kanatype sensitivity setting, a width sensitivity setting for Unicode data, and a sort order and code page for non-Unicode data.

For SQL_Latin1_General_CP1_CI_AS the description is as follows:

  • Latin1-General
  • Case-insensitive
  • Accent-sensitive
  • Kanatype-insensitive
  • Width-insensitive for Unicode data
  • Sort Order 52 on code page 1252 for non-Unicode data

 

How can we use it?

SQL Server lets us not only set a default collation for each database but also alter collation on the fly, using the COLLATE clause.

DECLARE @String VARCHAR(10);
SET @String = 'abcdefg';

IF @String = 'ABCDEFG'
    PRINT 'Match';
ELSE
    PRINT 'No Match';

If your SQL Server default collation is SQL_Latin1_General_CP1_CI_AS, this query results in “Match”. But what if we use COLLATE to change the collation on the fly?

DECLARE @String VARCHAR(10);
SET @String = 'abcdefg';

IF @String = 'ABCDEFG' COLLATE SQL_Latin1_General_CP1_CS_AS
    PRINT 'Match';
ELSE
    PRINT 'No Match';

This now results in “No Match” because the collation changed from case-insensitive to case-sensitive at run time. If we change the value of @String to all uppercase, we get a match again under this collation; if any part of the string does not match the case of the comparison value, it returns “No Match”.


Visibility with multivalued parameters

Every SQL Server Reporting Services developer knows about the magical “Hidden” property that lies within each object placed in a report. This property lets you hide certain objects on command and can be used for many things: hiding columns in a tablix, hiding charts based on a parameter, etc. In one of my latest reports I had to get SSRS to show/hide details based on a multi-valued parameter. Hiding off of a single-valued parameter using an expression is easy and everyone knows it, but showing/hiding based on how many selections were made in a single parameter? I didn’t know how, and I needed to find out. After some research I came across article after article explaining various methods to achieve this, but none of them seemed to work except for two. These methods are as follows:


Method #1: Using COUNTROWS()

This method worked great until I ran into its caveat: it is an all-or-none scenario. Either I choose all values and the details are hidden, or I choose any other number and the details are shown. That isn’t what I was hoping for, since I wanted to show all details when only ONE value was chosen and hide them when two or more were chosen. This method shows the details as long as the count of selected values is less than the count of rows in the dataset, which could be useful in some situations, but not in mine. The expression is as follows:

=IIF(COUNTROWS("dataset").Equals(Parameters!parameter.Count),true,false)


Method #2: Using a boolean expression

This method is much simpler and met my requirements easily. In the code from Method #1 you will notice the “Parameters!parameter.Count” section; this counts the number of selections made within a parameter. Since the Hidden property is based on a boolean value anyway, why not just write a boolean expression and let the result determine the outcome? This worked perfectly; the expression is below:

=Parameters!parameter.Count > 1

Such a simple answer and I didn’t even know it. But now we know and knowing is half the battle.

