What Are We Doing Here, Exactly?

There’s a secret about being a developer that I forget all the time.  I know it, and I should remember it, but in the daily drama of life, I tend to forget it.  Here it is:  Knowing how to do something is the easy part.  Knowing what to do… that’s hard. 

Technical skills allow me to execute on a plan.  The good news is that if I don’t know how to do something, there is a wealth of resources out there to help me out.  I can probably pilfer a bit of code from a blog, find a checklist, or even call a friend.  Knowing the plan?  That’s the hard part. 

Brent Ozar (Blog|Twitter) wrote a brilliant post about being a consultant.  It was so brilliant and apropos that I e-mailed him to ask for advice on a few things I’m dealing with.  He was awesome, and thoughtful, and gave me some great ideas.  He even recommended a book, The Secrets of Consulting by Gerald Weinberg.  I read it, and I immediately felt better.  Everyone should read it.  It validated something that had been creeping around in the recesses of my brain:  I didn’t have a good plan.

Why not?  Well, I’d been so busy executing that I’d forgotten to take a step back and think.  It happens to the best of us.  I tend to be an intuitive person.  I feel like something’s not right long before I can put my finger on it.  It makes me crazy.  It’s like the intelligent part of my brain is whispering, “Audrey… Audrey… Pay attention.  This isn’t working”, while the rest of my brain is totally focused on crossing things off the to-do list.  The problem?  Maybe the to-do list is dead wrong. 

So that’s what development managers and project managers are for, right?  They put the plan together. They figure out the what, we figure out the how.  Right.  Right?  Hogwash!  Yeah, I said it.  Hogwash. 

Here’s what I believe.  Every person involved in a project, from the college intern to the CTO, needs to do a personal assessment of the project they’re on.  I’d flat forgotten this personal belief of mine.  In the rush to deliver, I’d jumped headfirst into a project without first grounding myself.  So, after talking with Brent and reading Weinberg’s book, I assigned myself a task:  Assess the Project. 

Now, I’m not the first person to do this, and I’m certainly not the first person to talk about this.  But I constantly have to remind myself to actually do it.  It is a liberating exercise.  Putting onto paper what’s worrying me about a project makes it real.  If it’s real I can do something about it, or at least see it coming before it smacks me in the face. 

I have a few personal rules for my assessments: 
1)      It is a personal document.  For my benefit.  I might use it as a reference for later communication, but for now, it’s just me and my stream of consciousness.  I don’t worry about sounding negative or hurting feelings.  If I think it’s going to be really bad, I might even write it at home on my personal computer.

2)      It is mostly a problem-defining exercise, not a problem-solving exercise. 

3)      No edits till I’m done.  None.  No tweaking, rewording, or rethinking.  This one is hardest for me.  I can’t help myself sometimes, and the urge to soften a harsh word or begin in-line rationalizing is tough to resist. 

4)      This is not a technical document.  It is an emotional document.  Everything from “I don’t know how to do X” to “Mr. End User refuses to cooperate” is fair game. 

Anyway, here’s my process.  I ask myself some questions, and answer them.  Really, it’s just a set of lists. 

1)      What is the current state?  – What’s going on in the business that prompted this project in the first place?  What needs to be improved/created/maintained?  Is the system too slow?  Are users complaining?  Are we losing customers? 

2)      What is the desired state?  – What does everyone want the world to look like when this project is done?  Is capacity higher?  Turnaround faster?  Errors reduced?  Is there a totally new process?  Is there a shiny new system? 

3)      What are the problems? – (Remember, we can use the politically incorrect “problem” because it’s a personal document) What’s keeping us from getting to the desired state?  What issues do we keep tripping over?  Who’s being difficult or unrealistic?  Is the schedule reasonable?  What am I awake at 4:00 AM worrying about? 

4)      What can I fix? – Here, I sort of break one of my rules.  I try to identify what I can fix that’s broken.  Key point:  What, not how.

a.       Right Now – What can I do right now without anything else happening first?  I don’t worry about time or resources; I just list everything I could theoretically fix.
b.       Right After – If I fix the things I could fix right now, what’s next?
c.       And Then?  – If I can theoretically get through the “Right Now” and “Right After”, what could I do? 
 
TANGENT 1:  It’s interesting to see if putting together these three lists naturally gets me to the desired state I defined in List 2.  BUT… I resist the urge to force it.  Be honest.
 
5)      What is my conclusion? – This is the part where I get to rant.  I just start writing about where I think this project is headed.  Hopefully, the first 4 lists I’ve put together have helped me get my head on straight.  If not, well, that tells me something too.  Seriously, I rant.  I tell it like I think it is.  No one is going to read what I say except for me.  Do I believe the project is going to fail?  I say it.  Then say why.  Do I think we need to go in a different direction?  I put it down.  Do I think I’m failing to deliver?  Why?  It’s the most liberating part of this process.  Feels like confession.

6)      What questions do I have? – I read back over my first 5 lists, and start writing down any questions I can’t answer.  Doesn’t matter how big or stupid or rude.  In the past, I’ve written things like, “Does anyone care if this project succeeds?”, and “Can we hit the deadline if [Name Redacted] keeps screwing up?”  I might never ask these questions out loud, but it’s therapeutic to ask myself.  I might even come up with a few that need real answers that I can ask in public and look proactive and smart. 

TANGENT 2:  If I can’t put the 6 lists together off the top of my head, this is a giant, flaming red flag.  If I can’t define where we are, or where we want to go, or what I can do to help get us there, I’ve got real problems. 

So, I’ve poured my thoughts and worries and soul into answering 6 basic questions.  What now?  I put it away.  I leave it alone to marinate for a day.  Then, I open it back up again.  I read it and try to see what the basic feel is.  Is it optimism or despair?  Was I overly negative, or did I apply false optimism to my lists?  And, most important, do I see the beginnings of a plan? 

Ninety-nine times out of a hundred, I see things I could be doing differently.  I can begin to filter out the things I can improve versus the things I have no control over.  I usually see a plan emerge.  I see a way out of whatever hole I’ve dug for myself (or been thrown into).  Most importantly, I have a lodestone in this assessment next time the manager asks me what I think.  I’ve already thought about it and put it on paper, and I’m not fumbling around trying to describe some general feeling of “Not Rightness”.

I don’t believe that there are impossible projects, but I do believe that there are impossible plans.  More than there should be, actually.  I know that Development Managers get sick of hearing us whine about the plan.  But, if I can say, “I have a few specific questions and ideas about the plan”, they usually sit up and listen.  See, there’s another dirty little secret that my dear friend Josh Lane told me once:  We all think there are these experts out there that have all the answers.  Guess what?  There aren’t.  It’s us.  We’re it.

Scary as hell?  Yes.  But it also means that it’s on us as developers to not just solve problems, but to help define them as well.  Ask anyone in our business… it really is harder than it looks.

Noir SQL… Or, a Hardboiled Approach to Getting the Job Done

You can tell a lot about my state of mind by the books I’m reading. Lately, it’s Urban Fantasy with a Noir feel to it. Specifically, I’m reading Mike Carey’s Felix Castor series, and I just finished a book by Richard Kadrey called Sandman Slim: A Novel. I love the anti-hero. The protagonist who is gritty and dirty and has a few great scars is my kind of guy. He unapologetically breaks the rules and isn’t all, “it’s more about the journey than the destination.” For him, destination is what matters, no matter how you got there.

Lately, I feel a bit like the scarred anti-hero. I’m doing some things in a production environment that I’m not totally thrilled about, and I wish I could stop the line and do things the “right” way. I want to use SSIS to transform data. I want to encapsulate processes into neat, repeatable, parameterized modules. But, you know what? When there’s a same-day turnaround on a request, you make do. You go a little Noir on your T-SQL, know what I mean?

I want to show you two things that I’ve actually done in the past few weeks. Now, given a nice, neat environment, this SQL might never have been written. Am I proud of it? Well, yes. Yes I am. At the end of the day, I got the customer what he needed. Was it pretty? No. I’m cool with that. Being the anti-hero is kind of fun every once in a while.

Fixed-Width Output

I needed to give a guy a text file in fixed-width format. I had a process from my predecessor that just wasn’t working. The file was already late. So here’s what I did. I’m using the AdventureWorks database to show an example.

SELECT
	LEFT((ISNULL(Title,'')+SPACE(50)), 8)+
	LEFT((ISNULL(FirstName,'')+SPACE(100)), 20)+
	LEFT((ISNULL(LastName,'')+SPACE(100)), 30)+
	LEFT((ISNULL(MiddleName,'')+SPACE(100)), 5)+
	LEFT((ISNULL(EmailAddress,'')+SPACE(100)), 35)+
	LEFT((ISNULL(Phone,'')+SPACE(100)), 25)
FROM AdventureWorks.Person.Contact;

The result is one long, fixed-width string per row.  Paste it into Notepad and the columns line up.

I save the text file and send it on. Pour myself a whiskey, neat, and light up an unfiltered Lucky Strike.  Okay, not really, but you know what I mean. 

A quick run-down:

ISNULL: If any of the values I’m concatenating are NULL, then the entire string will come back as NULL. I wrap all of my columns in ISNULL like so:

ISNULL(Title, '')

This sets the value to an empty string if the value is NULL.

SPACE: This handy little string function returns a string of however many spaces you ask for.  I want to make sure I end up with enough padded spaces to fill out the fixed-width portion of that column.  So, I pad the output:

ISNULL(Title, '')+SPACE(50)

This will give me the output from the Title column, plus 50 spaces.

LEFT: Now, not every value coming out of the database is going to have the exact same number of characters.  So, I use the LEFT function to trim it down to the exact length I want.  LEFT will take the left-most number of characters you tell it to.  If I say,

LEFT((ISNULL(Title,'')+SPACE(50)), 8)

I’m telling it to give me characters 1-8 that are returned. Since I’ve padded my output with spaces, it’ll be the result from the column, plus as many spaces as I need to pad the output to 8.
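For the record, there’s a slightly tidier variation on the same trick, assuming you’re okay with CAST’s behavior for fixed-length strings: casting to char(n) pads short values with trailing spaces and silently truncates long ones, so it does the pad-and-trim in one step. A sketch of the same query:

```sql
SELECT
	CAST(ISNULL(Title,'')        AS char(8))  +
	CAST(ISNULL(FirstName,'')    AS char(20)) +
	CAST(ISNULL(LastName,'')     AS char(30)) +
	CAST(ISNULL(MiddleName,'')   AS char(5))  +
	CAST(ISNULL(EmailAddress,'') AS char(35)) +
	CAST(ISNULL(Phone,'')        AS char(25))
FROM AdventureWorks.Person.Contact;
```

Same output, fewer moving parts. I still keep the ISNULLs, because concatenating a NULL nukes the whole row.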

Pretty? No. Functional? Yes. Noir SQL? Absolutely.

Remove Unwanted Characters

Next up, I have a source file I use from another department. It comes in Excel format, and includes a phone number. I’m supposed to get something that looks like this: 1112223333. Nice, neat, simple. What do I get? A hodge-podge of phone number formats. I’m looking at something like this:

CREATE TABLE PhoneNumber
(
	PhoneNumber varchar(50)
); 

INSERT INTO PhoneNumber(PhoneNumber)
VALUES
	('1112223333'), ('(111) 222-3333'), ('111-222-3333'), ('111 222 3333'); 	

SELECT PhoneNumber
FROM PhoneNumber

Okay. So I need to clean these numbers up quickly. Destination, not journey, my friends. I’m the anti-hero. I import the data into SQL Server using the Import/Export utility so I can manipulate the data. Then, I run this nifty little REPLACE statement:

SELECT PhoneNumber,
	CASE
	WHEN ISNUMERIC(PhoneNumber) = 0
		THEN REPLACE(
			REPLACE(
				REPLACE(
					REPLACE(PhoneNumber, '-', ''),			--Strip out dashes
				' ', ''),							--Strip out spaces
			')', ''),								--Strip out close parenthesis
		'(', '')									--Strip out open parenthesis
		ELSE PhoneNumber
	END as FormattedPhoneNumber
FROM dbo.PhoneNumber

Check out the results: every row comes back as a clean 1112223333.

Sweet. It’s quick, it’s dirty, and it saved me having to wait on the source data provider to clean things up on his end. I turn the query into an UPDATE statement, and I’ve got clean data to import.  Again, a run-down of the functions:

ISNUMERIC: Tells me whether the value I’m passing qualifies as a number or not. NOTE: It recognizes hexadecimal as a number, so use carefully. I set up a CASE statement that asks if the value is numeric. If it is, that means I don’t have any characters like “(“, “)”, or “-“ in there. If not, I apply a nested REPLACE to the value.
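If you want to see the quirks for yourself, here are a few quick sanity probes (not part of the cleanup, just me poking the function):

```sql
SELECT
	ISNUMERIC('1112223333')   AS PlainDigits,     -- 1: converts cleanly
	ISNUMERIC('111-222-3333') AS Dashes,          -- 0: this is what routes a row into the REPLACEs
	ISNUMERIC('$')            AS CurrencySymbol,  -- 1: convertible to money, believe it or not
	ISNUMERIC('1e4')          AS FloatNotation;   -- 1: scientific notation counts too
```

Moral: ISNUMERIC means “convertible to some numeric type”, not “looks like a phone number”. For this quick-and-dirty cleanup it’s good enough, but know what you’re getting.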

REPLACE: Replace is awesome. I can say something like this: REPLACE(PhoneNumber, '-', ''). This is saying that if I find a dash, I want to replace it with an empty string. What’s really cool is that I can nest them. So, I can tell it to remove the dashes, then the spaces, then the open parenthesis, and finally the close parenthesis in one statement.
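Since I mentioned turning the query into an UPDATE, here’s roughly what that looks like (a sketch of my approach; same nested REPLACE, applied in place, with the ISNUMERIC check keeping the already-clean rows out of the way):

```sql
UPDATE dbo.PhoneNumber
SET PhoneNumber =
	REPLACE(
		REPLACE(
			REPLACE(
				REPLACE(PhoneNumber, '-', ''),	--Strip out dashes
			' ', ''),				--Strip out spaces
		')', ''),				--Strip out close parenthesis
	'(', '')					--Strip out open parenthesis
WHERE ISNUMERIC(PhoneNumber) = 0;
```

If you’re nervous, run it inside a transaction and eyeball the results before you COMMIT. Anti-hero, not reckless.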

Bottom line: Sometimes things just have to get done. The difference between an anti-hero and a true antagonist is that we anti-heroes know to go back and do things the right way as soon as we get a moment to breathe. In the meantime, don’t apologize for leaving behind a few unmarked graves when you need to get the job done. We’re anti-heroes. We have the scars to prove it.

T-SQL Tuesday #14 – Audrey’s (Career) Aspirations for 2011

Here we are again for another T-SQL Tuesday. This month’s event is being hosted by MidnightDBA (Blog|Twitter). The whole crazy concept is the brainchild of Adam Machanic (Blog|Twitter). If you’re interested in what this thing is about, check out this month’s invite.  A quick bit of gushing praise for the whole T-SQL Tuesday thing: For me, deciding what to write about is the hardest part of blogging. When someone tells me, “Hey Audrey. Write about this. And we promise a ton of people will come read about it”, well, you don’t have to tell me twice. I’m on it like the paparazzi at a celebutante convention.

The topic of this month’s event is “Resolutions”, which is, you know, totally apropos since it’s the first month of the year and all. Personally, I don’t make many resolutions, and they’re usually boring. For example, last year’s personal resolutions included: 1) Lose 10 pounds (I didn’t) 2) Drink more water (I did), and 3) Clear out clutter (Sort of, a little). I do establish career goals, and as my mama always said, “They aren’t real until you write them down”. She was a list-maker. She would make a list, and if she did something that wasn’t on the list, she’d add it just so she could cross it off. Watching her made me a list-maker too. It’s part of my morning routine to sit down and write down what I’m trying to accomplish that day. Yes, I said write. Like with pen and paper. I need the ritual. So it makes sense to get my to-do list together for 2011. Thanks to MidnightDBA for giving me a good excuse to really think about it.

Audrey’s (Career) Aspirations for 2011

1) Learn more about SQL Server internals. I’ll be reading Microsoft SQL Server 2008 Internals by Kalen Delaney (Blog|Twitter), Paul S. Randal (Blog|Twitter), Kimberly L. Tripp (Blog|Twitter), Conor Cunningham (Blog), and Adam Machanic (info in 1st paragraph). This book has been sitting on my shelf, glaring at me for not reading the entire thing for far too long. It’s time to go cover-to-cover, baby.

2) Learn more about Analytics, using SQL Server as well as other products. I want to get better at the UDM/cube/presentation portion of the BI Stack. I’m still figuring out the right approach for this aspiration. But, I can assure you that Project Crescent and BISM are somewhere on the plan.

TANGENT: When I hear Project Crescent, I immediately think “crescent roll”, and then I think “Crescent City” which is New Orleans, and then I think of beignets, because they are tiny bits of powdered sugar-dusted heaven. Then, I get a little homesick, because there aren’t many places around Atlanta where you can get a good beignet. I grew up near Houston, which is close enough to NOLA that the good food tended to bleed over into our part of Texas. They should have just named it Project Beignet to save me the time it takes to get through my stream of consciousness. :END TANGENT

3) Re-read The Data Warehouse Toolkit by Ralph Kimball and Margy Ross. Why? Well, I haven’t read it in a couple of years, and I have a build-out of a dimensional model on my plate. Reading this book again is like stretching before the big game. I’ll feel warmed-up and ready to go when the project really gets rolling.

4) Learn PowerShell (for SQL Server). You know, there are a lot of reasons for learning PowerShell, but the one that motivates me the most is this: My first programming language was Turbo Pascal 7.0 during my senior year of high school in 1994. (I’ll save you the math… I just turned 35. [sigh…]) My first RDBMS was Oracle 7.3 in 1996. Neither had snazzy GUI’s to help me limp along as I was learning. I love how my world has come full-circle. Here we are in 2011, and people are singing the praises of a non-GUI-based way to interact with SQL Server. I want in on the fun. AND… I’m tired of hearing from Aaron Nelson (Blog|Twitter) about how great it is and not knowing for myself.

5) Finish my certifications. I got my MCTS certs for SQL Server 2008 Business Intelligence and SQL Server 2008 Database Development late last year. I promised myself that I’d go on to the MCITP exams this year. This one has a timeline too… I’ll get both before the end of the summer. I’m toying with the idea of going after SQL Server 2008 Database Administration too. You know, because they’re there. And I’m so ridiculously competitive that the idea of leaving the tests untaken is bothering me.

6) Blog, and blog consistently. My fellow Datachix, Julie Smith (Blog|Twitter) and I promised each other that we’d each blog every other week. That’s 26 blog posts for me this year. I’ll go a step further and say that 20 will be technical posts. You’re not getting much fluff from me this year, my friends.

7) Speak. I will present more. I will learn to present virtually. I will rock the house with my awesome, well-prepared, technically relevant, and entertaining presentations.

8) Finally, I will be a great consultant. I already know how I’m spending 2011 from a client standpoint, and I’ll be working to make sure that they look back on 2011 and remember it as the year that they finally got their data straight, their processes together, and their analytics moving in the right direction.

So, in conclusion, my 2011 aspirations are: Read, Study, Read, Learn, Test, Blog, Speak, Rock. Easy-peasy, right? Whoo-boy! I’ve got to go. I need to get started.

Wait, before I go… My 2011 aspiration for all of you is that you have a wonderful, satisfying, and all-around kick-ass year. Make time for the things you love. Learn something new. Try something you’re scared of. Make some new friends. Go for a walk in the rain without an umbrella. Watch a sunset or two… and maybe a sunrise. Tell the people you care about how awesome they are. When presented with a great opportunity, go for it. When 2011-12-31 23:59:59.999 rolls around, you’ll be glad you did. I’m totally rooting for you!

Thanks for reading all the way through my list, and if you see me, ask me how I’m doing on it. There’s nothing quite like public accountability to keep me honest.

Rock on, my friends…

—Audrey

Adventures in MDX – Sets

Oy, Audrey has violated the Book of Bloggering! She failed to post on her designated Tuesday, and fellow Datachix Julie had to step in with back-to-back posts. (Reason #81 why I heart her.) In my defense, dear readers, I’m neck deep in work at a new client. And while the Book of Bloggering dictates the alternating schedule, the Book of BI Consulting Chapter 17, Verse 12 says: “Thou shalt keep thy client happy at all costs. Regardless of disgruntled users, tight deadlines, or processes in need of improvement, the BI Consultant shall deliver, and deliver well.” As much as I love to write blog posts, my Wine and Kindle budgets demand a regular paycheck.

So, with no further ado, I present to you my long-overdue post. It’ll be the second in what will hopefully become an MDX series. Today, we’ll be taking a look at Sets. Building upon the last post, Adventures in MDX – Tuples, we’re still getting a handle on the structure and concepts around querying a cube. But, I pinky-promise you: we WILL eventually begin writing some pretty darn cool queries.

As I’ve mentioned before, I’m just not good at MDX. No excuses… I’m just not. Rather than jumping in and memorizing functions and complex structures, I’m trying to train my over-saturated brain to comprehend how the data is structured, and subsequently, how to get said data out of the cube and into a result set. The first step was to understand Tuples. A quick recap: A Tuple is a data point – the intersection of all of the dimensions at a certain place. Imagine you have a nice, big, freshly baked cake. Maybe a chocolate layer cake with chocolate ganache. I’m just spit-balling… choose any flavor you like. This is our cube proxy. Anyway, stick a toothpick into that cake. The spot where the point of the toothpick stops: Tuple. It’s a single point in our cake, the intersection of eggs, flour, chocolate, sugar, butter, etc.

Now, take a knife and cut into the cake. You’ve defined a set. It’s a collection of Tuples. Cool, huh? Now, stop cutting! I have to get through a basic set before we cut (SELECT) a whole slice of data out of that decadent, delicious cake. As I mentioned before, I’m using the Microsoft SQL Server 2008 MDX Step-by-Step book as my primary resource. Much credit to these guys for their excellent tome.

The best way to illustrate a SET is to build up a SELECT statement in MDX. So, that’s just what we’re going to do.

Defining a set gives you a lot of power over the way that your result is presented to you as well as what’s included in it. In most cases, you’re going to define what’s on COLUMNS and ROWS, the first two of 128 possible axes you can define. I’d love to talk to the sadistic you-know-what at Microsoft that thought it would be funny to try to make my brain fry by encouraging me to even attempt to visualize how a result set would be presented on 128 different axes. It’s okay, though, somebody probably failed to tell him that SSMS only allows you to return two axes in a result set. If you try to define a third, PAGE, for those of you keeping track at home, you’ll get an error message instead of results. Ha! Take that, Mr. Microsoft Over-Achiever!

Anyway, let’s start building us a SELECT statement in Management Studio. First, make it as basic as possible:

SELECT
FROM [Adventure Works];

You get this:

Um, okay, that’s nice. 80 million dollars. That tells me… nothing useful. But, we have a syntactically correct MDX query, so I’m not complaining.

TANGENT:

By the way, what does that ~80 million represent? Reseller Sales Amount. Why? Because it’s the default measure for the Adventure Works cube. How do we know? Open up BIDS. Open the Analysis Services Database, Adventure Works. Open the Adventure Works cube, and go to the Cube Structure tab. Right-click on the Adventure Works cube in the Measures section (top-left corner) and select Properties. There’s a DefaultMeasure property. It says Reseller Sales Amount. There you go.

END TANGENT
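If you’d rather not lean on the default at all, you can name the measure explicitly. This little sketch should return the same ~80 million, but at least it comes back with a column header telling you what it is:

```mdx
SELECT
{ [Measures].[Reseller Sales Amount] } ON COLUMNS
FROM [Adventure Works];
```

Measures act like just another hierarchy here, which is a theme we’ll keep running into.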

But, we can do better. Let’s define a set that will give us column headers:

SELECT
{
	 ([Sales Territory].[Sales Territory Country].[Australia])
	,([Sales Territory].[Sales Territory Country].[Canada])
	,([Sales Territory].[Sales Territory Country].[Germany])
	,([Sales Territory].[Sales Territory Country].[United Kingdom])
	,([Sales Territory].[Sales Territory Country].[United States])
} ON COLUMNS
FROM [Adventure Works];

That thing up there in the SELECT clause? A SET! Note that it’s enclosed in curly brackets ({}). Yes, I know they’re called braces. I call them curly brackets. More descriptive. Also note that each thing between the commas is a Tuple. Therefore, Collection of Tuples! There is an important, nay, vital rule being followed here: every tuple in the set has to reference the same hierarchy. Now, I don’t have to define the SAME LEVEL of the hierarchy in all of my tuples. I can do something like this:

SELECT
{
	 ([Sales Territory].[Sales Territory Country].[Australia])
	,([Sales Territory].[Sales Territory Country].[Canada])
	,([Sales Territory].[Sales Territory Country].[Germany])
	,([Sales Territory].[Sales Territory Country].[United Kingdom])
	,([Sales Territory].[Sales Territory Country].[United States])
	,([Sales Territory].[Sales Territory Country])
} ON COLUMNS
FROM [Adventure Works];

Cool. But, I can’t reference two different hierarchies from the Sales Territory dimension in one set. Check this out:

SELECT
{
	 ([Sales Territory].[Sales Territory Country].[Australia])
	,([Sales Territory].[Sales Territory Country].[Canada])
	,([Sales Territory].[Sales Territory Country].[Germany])
	,([Sales Territory].[Sales Territory Country].[United Kingdom])
	,([Sales Territory].[Sales Territory Country].[United States])
	,([Sales Territory].[Sales Territory Region].[Northeast])
} ON COLUMNS
FROM [Adventure Works];

Ooh, error. Back to the cake analogy… This would be sort of like starting to cut into the cake, and then picking up the knife and stabbing it into another part of the cake. You wouldn’t expect a clean slice, and the same goes for the query. It just doesn’t know how to pull this data back. By the same token, I can’t mix in a tuple from an entirely different dimension either. Really, why would you do this to your lovely chocolate ganache, anyway?

Okay, there’s more we can do with these column headers that are being returned. We can define a more detailed tuple. Maybe I want to see why people bought products in Australia. Watch this:

SELECT
{
	 ([Sales Territory].[Sales Territory Country].[Australia], [Sales Reason].[Sales Reason].[Quality])
	,([Sales Territory].[Sales Territory Country].[Australia], [Sales Reason].[Sales Reason].[Price])
	,([Sales Territory].[Sales Territory Country].[Australia], [Sales Reason].[Sales Reason].[Magazine Advertisement])
	,([Sales Territory].[Sales Territory Country].[Australia], [Sales Reason].[Sales Reason].[Review])
	,([Sales Territory].[Sales Territory Country].[Australia], [Sales Reason].[Sales Reason].[Manufacturer])
} ON COLUMNS
FROM [Adventure Works];

Sweet. Remember the rules from the Tuple episode? When a tuple is defined, every single dimension is actually represented in the query, even if you don’t explicitly name it. It defines the tuple members used according to the Other Three Rules*: Default Member, then (All) Members, then First Member. Before, the Sales Reason dimension was accounted for, but it was using the (All) Members rule because a Default Member isn’t defined. This time around, we’re telling the query exactly which members from the Sales Reason dimension to return, as well as which order to return them in. I could go on. I could define this tuple out to my heart’s content. BUT, there is one big rule to follow: The Set requires that the dimensions are given in the same order in every tuple. The following query will return an error:

SELECT
{
	 ([Sales Reason].[Sales Reason].[Quality], [Sales Territory].[Sales Territory Country].[Australia])
	,([Sales Territory].[Sales Territory Country].[Australia], [Sales Reason].[Sales Reason].[Price])
	,([Sales Territory].[Sales Territory Country].[Australia], [Sales Reason].[Sales Reason].[Magazine Advertisement])
	,([Sales Territory].[Sales Territory Country].[Australia], [Sales Reason].[Sales Reason].[Review])
	,([Sales Territory].[Sales Territory Country].[Australia], [Sales Reason].[Sales Reason].[Manufacturer])
} ON COLUMNS
FROM [Adventure Works];

Again, this query is like stabbing your knife into the cake and expecting to come out with a beautiful slice. MDX likes clean cuts. So, it wants consistently defined tuples. Humor it.

Okay. Remember how I told you to quit after making the first cut into your cake? Go ahead, make the second cut. I’ll wait…… Oh good, you’re back. Hey, you have a little icing on your chin. Right there. No, there. To the left. There you go, got it. So, you cut twice (asked for two Sets) and ended up with a nice piece of cake (Data) didn’t you? Awesome. Let’s continue to wring the life out of this analogy and look at the MDX.

SELECT
{
	 ([Sales Territory].[Sales Territory Country].[Australia])
	,([Sales Territory].[Sales Territory Country].[Canada])
	,([Sales Territory].[Sales Territory Country].[Germany])
	,([Sales Territory].[Sales Territory Country].[United Kingdom])
	,([Sales Territory].[Sales Territory Country].[United States])
} ON COLUMNS
,
{
	 ([Date].[Calendar Year].[CY 2005])
	,([Date].[Calendar Year].[CY 2006])
	,([Date].[Calendar Year].[CY 2007])
} ON ROWS
FROM [Adventure Works];

Okay, so what’s this doing? Well, it’s saying, “Hey, MDX, I want you to go out and find the Reseller Sales Amount. Then, I want you to break it down for me. I want column headers that show the Countries I’ve specified. Then, I want row headers that show the years 2005 – 2007. Finally, I want the portion of the overall Reseller Sales Amount in a cell at the intersection of the Country and the Year.”

I said that we weren’t going to get into functions yet, but I do have one little thing I want to close with. The Members function. This is sort of like the “SELECT *” of MDX. You can tag a “.Members” onto the end of a [Dimension].[Hierarchy].[Level] reference (or even a [Dimension].[Hierarchy] reference) inside a tuple. I’m going to re-write the COLUMN set to return pretty much the same data, but with less carpal-tunnel syndrome.

SELECT
{
	 ([Sales Territory].[Sales Territory Country].Members)
} ON COLUMNS
,
{
	 ([Date].[Calendar Year].[CY 2005])
	,([Date].[Calendar Year].[CY 2006])
	,([Date].[Calendar Year].[CY 2007])
} ON ROWS
FROM [Adventure Works];

Check that out. It even gives us members we didn’t know to ask for, including an (All) Members summary. This function is great for a couple of reasons: 1) You don’t have to type so much. 2) If you don’t know all of the hierarchy members, you don’t have to go look them up.  And, if the members change down the road, you’re not slogging through MDX queries manually updating them. 
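And the trick isn’t COLUMNS-only. Put .Members on the ROWS set too, and you get every calendar year without typing a single one (a sketch against the same cube):

```mdx
SELECT
{ ([Sales Territory].[Sales Territory Country].Members) } ON COLUMNS
,
{ ([Date].[Calendar Year].Members) } ON ROWS
FROM [Adventure Works];
```

Same caveat as before: you’ll get the (All) summary members along for the ride, so don’t be surprised by the extra row and column.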

Alright, so maybe we’re not to awesome-ness yet, but you have to admit, not too shabby. There are about a bazillion other things you can do with these sets, and we’ll get to them. But for now, let’s take a break and enjoy the lovely piece of cake… er, data we’ve created.

Query on, my friends.

*Other Three Rules because the Three Rules are strictly reserved for references to Isaac Asimov’s I, Robot and the Foundation series. If you’ve only seen the movie, for the love of all that is good in this world, go read the books. While you’re at it, go read Starship Troopers by Robert Heinlein. That book got the short end of the movie stick too. Seriously, Denise Richards? Denise Richards?!? They should have made her shave her head to stay true to the story.

T-SQL Tuesday #13 – Make Nice with the Business

This month’s T-SQL Tuesday asks the following question:  What issues have you had in interacting with the business to get your job done?   First, much thanks to Steve Jones (Blog | Twitter) for hosting this month’s event. 

As the old joke goes, this job would be great if it weren’t for the users.  (insert rimshot here) Seriously folks, my job title has “Business” in it, so I’d better be able to figure out what the user is asking for.  Being a BI Consultant is one part developer, one part psychologist, one part archaeologist, and two parts translator.  If the user asks for a report, that’s great.  Actually, that’s more than I often get.  Sometimes I get vague, conflicting requests.  Sometimes I get requests that just confuse me. 

Take my Top Five Favorite requests from end users: 

5) “There’s this report that John used to produce in 1997.  It was great.  It took him 12 days to put it together, but it had everything I needed.  I want that, only sooner.”

It’s a shame that John checked himself into a mental facility in 1999 and is now spending his days creating lovely landscape paintings with non-toxic watercolor paint and taking meds on a rigorous schedule.  It’s also a shame that no one can remember what the report looked like, only that it was really good. 

4) “I want to slice and dice the data however I want.”

Not bad.  At least they’re referring to being able to filter the data in some way.  The scary part of this request:  “However I want”.  Do you want to slice the data by what color shirt you were wearing on that day?  Let’s narrow this down a bit. 

3) “I’m not sure what I want, but I’ll know it when I see it.” 

–Sobbing–  You’ll know it when you see it?  Okay.  Okay… let’s extend that deadline. 

2) “The data should be sexy.” 

Sexy?  Let’s define sexy.  I think beautifully-structured, properly normalized, and well-performing data is sexy.  Is this what you meant?  No?  Wow, I thought we were on the same page here. 

And my all time favorite user request….

1) “I want it to be like an iPhone.  You know, an Apple feel to it.”

What, you want the data to wear a black turtleneck?  You want to be able to swipe and pinch the data? 

Users, I love you.  You keep my mortgage paid and my kids in shoes.  Truly, if it weren’t for you, I’d be out of a job.  But, sometimes, you make for good happy hour stories. 

Just this past week, I ran into a situation that BI Consultant nightmares are made of.  Let me set the stage: 

I’ve been at a new client for about 3 weeks.  Let’s just say I’m not exactly the resident expert yet.  It’s a very large client, with a very challenging data environment.  It’s the beginning of the month, which means that end-of-month and some quarterly reports are due.  Most of these processes have not been automated yet.  Read:  We’re copying query results into Excel and e-mailing them.  The one guy who is the resident expert is on vacation.  I get an e-mail asking for a report that I’ve never seen before. 

I take a deep breath, gather myself, and respond, “Yes, I’ll get that to you.” 

I make a quick phone call to the guy who’s on vacation, get a bit of info, and create the queries to run the report.  I slap that data into Excel, e-mail it out, pat myself on the back, and go home.  Everything looks great from my end. 

Next morning, 7:24 AM, an e-mail is delivered to my inbox.  To paraphrase, it said, “I don’t trust these numbers.  There’s a huge variance in the 3Q numbers that we can’t explain.  I have a meeting at 9:30 about these results, and I’d like to definitively say whether they’re correct.” 

Crap.  I’m not even through my first cup of coffee yet.  I top off my coffee, and begin my investigation.  This is the archaeologist part of my job.  On the surface is a report that isn’t making sense to the business.  My job is to dig backwards until I either come up with an explanation or prove that the data is correct.  No easy task, considering that I honestly don’t know where much of this data is sourced from. 

Step 1:  Verify Your Own Work – First, I opened up the query I ran to produce the report.  Key point here.  I saved it.  I save everything.  My first move was to verify syntax.  Did I do something stupid like join a table to itself or create a funky WHERE condition?  Did I accidentally paste something into Excel improperly?  (Tangent:  This is why automation is a Good Thing.  Eliminates human error.)  Nope.  All quiet on the Western Front. 

Step 2:  Verify the Data Load – This data was sourced from a report database that is populated via an SSIS package.  Luckily, the guy who wrote the package sits a few rows over from me.  I check in with him, and he confirms that nothing has changed since the last load.  I ask for the source files anyway so that I have some outlets for additional research. 

Meanwhile, my Key2 Consulting compadre, Josh Robinson (Blog), is doing something really cool to help me out… He pulls the data into Excel and fires up PowerPivot.  Using the graphing functionality he’s got with the tool, he can point out anomalies in the data by different dimensions to try to narrow down exactly where we’re seeing the suspect data.  I was writing manual PIVOT statements in T-SQL, which was much less efficient than what he was doing.  Lesson Learned:  PowerPivot ain’t just for end-users.  It’s a great diagnostic tool. 

Step 3:  Verify the Source Files – I take a look at the source files.  Ha!  There!  The source data has the same disparity that the business users are complaining about.  This is good news.  This is a lead.  Now, I just have to find out who created these files. 

Step 4:  Find the Source File Owner – I make some calls, do some checking, and voila!  I have a name and a phone number.  It’s a very large company, so he’s halfway across the country, but I’ll still be able to get in touch with him. 

Step 5:  Contact Source File Owner – I call the guy who creates the source file.  He doesn’t know me from Eve.  After the requisite introductions, I ask about the change in the data.  He responds, “Oh yeah, we changed the way we’re pulling this piece of data.  You should see a huge increase in the number of gizmos from this month to that month.”  I thank him profusely, and move on. 

Step 6:  Wrap it all up – I make a courtesy call to the woman who is probably drumming her fingers on the table waiting for the data.  Then, I write up an e-mail with our findings, and I send it out to anyone who might care.

Wait, you thought we were done?  No, we’re not done.  Let’s back up a bit.  Yes, we explained the questionable data.  But a good archaeologist knows to dig just a little bit more.  I ask the business users, “Hey, so that source data changed, and the change was applied to this month but not that month.  Are we okay with that?”  This led to another round of conversations.  Ultimately, we decided to keep the data as-is and note the reason for the change.  The point is, if you want to try to make friends with your business users, you answer the questions they have, and then try to think of the ones they haven’t asked yet.  They’ll love you for it.

Adventures in MDX – Tuples

Personal Note: I wrote the bulk of this post last night, before I read Chris Webb’s blog post about the future of SSAS and MDX. I almost didn’t post this after reading that. But you know what? Screw you guys, I’m learning MDX anyway. I hate the idea that Microsoft would potentially remove aspects of the BI stack because the learning curve is too high. If I can’t keep up, then put me out of a job. Don’t dumb-down the functionality. That being said, I think there’s a lot of value in understanding the language that accesses any data store. It forces you to think about the internal structure of what you’re working with, and therefore, I see value in learning MDX either way. (But really… I was so disheartened after reading about some of the coming changes. You MS peeps had better know what you’re doing!) On to the original post:

My favorite movie is My Fair Lady. I love Audrey Hepburn. I love the Pygmalion story. Quick aside: I used to tell people that my parents named me after her. They didn’t, and the true story is convoluted. My mom loved the name Audrey Dalton (my middle name is Dalton), which was the name of a movie star. My great-great grandmother was Dalton Harris, and she thought it would be cool to name me after her and the actress. Then she met my dad. His mom’s name was Audrey. (She went by her middle name, Geraldine, which I never understood… but I digress.) Anyway, when I was born, she told everyone that I was named for my paternal grandmother and my maternal great-great grandmother. When, secretly, I was just named after an actress with a name she liked. I’m glad she told me this. (Audrey Dalton was a total hottie.)

<–  Audrey Dalton, HOTTIE (courtesy of Ballybane Enterprise Centre  http://www.bbec.ie/blog/?p=708)

This week, I’ve decided to start digging into MDX. There are three reasons for this:

1) It’s PASS Summit week. While I’m not there, I’m trying to get into the spirit of things by learning something new.
2) I’m just not good at MDX. There is no excuse for this.
3) I’m gearing up for my MCITP exam in Business Intelligence 2008. I hear rumor that there are MDX questions.

So anyway, I feel a lot like Eliza Doolittle this week. If you’re not familiar with the story, she is the subject of a bet between Henry Higgins and Colonel Pickering. They bet that Prof. Higgins can’t pass her off as a Lady (with a capital “L”) in a year. She’s just a lowly flower girl, complete with cockney accent. In order to refine her, he has to teach her how to speak again. It’s her own language, but she has to learn how to use it in a totally unfamiliar way. Instead of saying, “In ‘artford, ‘ereford, and ‘ampshire, ‘urricanes ‘ardly h-ever ‘appen”, she has to learn to say, “In Hartford, Hereford, and Hampshire, hurricanes hardly ever happen”. (Swear to cheesus, I haven’t hit IMDB yet… I really love this movie) Same words, same meaning, totally different accent.

Rather than a flower girl trying to sound like a Lady, I’m a T-SQL girl trying to sound like an MDX Lady. Or something like that. You know what I mean. 

   <– T-SQL Flower Girl

To get started, I picked up Microsoft SQL Server 2008 MDX Step by Step (by Brian C. Smith, C. Ryan Clay, and Hitachi Consulting). I’m starting with the basics, so right now I’m in “SELECT * FROM” territory. Or, “SELECT FROM” territory, since we’re talking MDX.

Transitioning from T-SQL to MDX is not easy. The syntax is just familiar enough to me to trip me up. I keep catching myself trying to equate a query against a cube to a query against a relational data store. It’s not the same, and it has been tough for me to wrap my head around it. But, “I washed my face and ‘ands before I come, I did”, so I think I’m ready to get started.

So far, I’ve learned about one important concept: Tuples. The point of this blog post is to force myself to regurgitate what I’ve learned, because to paraphrase something Jen McCown (Blog | Twitter) said the other day, you don’t really know something until you’ve taught it. True that. Please keep reading, but also read a book by an expert. I’ve been happy with the Step by Step book so far.

Wait… one more silly analogy. Writing T-SQL is a bit like cutting out paper dolls. It can be complex, but it’s just two dimensional space. Writing MDX is like chiseling a hole into a big rock at a specific point. It’s n-dimensional space. While a bit goofy, this visualization has really helped me draw a line between T-SQL and MDX.

Tuples (as Translated by Me)

A tuple is basically the identifying characteristics of a cell inside a cube. Really, a data point inside a cube. Say I have three dimensions, Actor, Movie, and Year. Say I have a Measure Group that includes Budget Amount. Say I wanted to find the cell, or data point, identifying the budget for the movie My Fair Lady starring Audrey Hepburn that came out in 1964. I’d look at the attribute-hierarchies Audrey Hepburn, My Fair Lady, and 1964. (Hey, I didn’t say it was a well designed cube!) Those identifying characteristics of the cell are the tuple, which would be formatted something like this in MDX:

(
[Actor].[Audrey Hepburn]
,[Movie].[My Fair Lady]
,[Year].[1964]
,[Measures].[Budget Amount]
)

Another way to look at it is in terms of math. I always swore that geometry and algebra were pointless in high school. Well, Mr. Smith, you were right. I’m about to talk axes. (axises? axii?) Each of my attribute-hierarchies and my measure group make up an axis within my cube. Don’t even try to visualize a 4-dimensional cube. I did, and it made my head hurt when I ran out of 3-dimensional space. Let’s label each axis:

[Actor].[Audrey Hepburn] = a1
[Movie].[My Fair Lady] = a2
[Year].[1964] = a3
[Measures].[Budget Amount] = a4

Now, if I want to identify the point that is the intersection, my notation would look something like this: (a1, a2, a3, a4). I also imagine four lines (in 2-dimensional space) all intersecting one another at one point. That point is my tuple.

The MDX syntax for my query looks like this:

SELECT
FROM [Pretend Movie Cube]
WHERE
(
[Actor].[Audrey Hepburn]
,[Movie].[My Fair Lady]
,[Year].[1964]
,[Measures].[Budget Amount]
);

It would return one value: $17,000,000

Some Key Points:

1) Every attribute-hierarchy gets an axis, NOT every Dimension. So, if I had two attribute-hierarchies within my Actor dimension, Audrey Hepburn and Rex Harrison, each one gets its own axis. I could actually reference the same Dimension multiple times like so:

(
[Actor].[Audrey Hepburn]
,[Actor].[Rex Harrison] -- Henry Higgins!
,[Movie].[My Fair Lady]
,[Year].[1964]
,[Measures].[Budget Amount]
);

2) Measures each get an axis. They are treated differently at design time, but for the purposes of seeking out that one cell or set of cells, it’s treated just the same as an attribute hierarchy.

3) Analysis Services allows you to be lazy. You can define what’s called a Partial Tuple, leaving out some axis references. But… it’s going to try to figure out where on that missing axis you were headed. It’s going to go in this order:
        1 – Default member (defined at design time)
        2 – (All) member –remember that Measures don’t have an (All) member
        3 – First member
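Sticking with my pretend cube, a partial tuple that leaves out the [Year] axis might look something like this (just a sketch; SSAS would fill in the missing Year member using the order above):

```mdx
SELECT
FROM [Pretend Movie Cube]
WHERE
(
 [Actor].[Audrey Hepburn]
,[Movie].[My Fair Lady]
,[Measures].[Budget Amount]
-- No [Year] reference: SSAS resolves it to the default member,
-- then the (All) member, then the first member.
);
```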

Am I getting this?  Have I missed the boat?  Close, but no cigar?  Any other cliche suggesting I don’t know what I’m talking about?



T-SQL Tuesday: Why are DBA Skills Necessary? – A Datachix Perspective

I have a few confessions to make: I encourage my son to watch Phineas and Ferb because I secretly enjoy it. Jersey Shore is strangely entertaining. I claim that I don’t sing, but if you ever see me alone in my car on I-85, I’m probably wailing away. I read all four Twilight books. I have a crush on Nathan Fillion.  And… lean closer… I’m going to whisper this one: I’m not a DBA.

My resume would say otherwise. I can point to multiple job titles throughout my career that have the words “Database Administrator” in there somewhere. Guess what? I’ve never been a DBA, and if pressed, I’d call myself a Developer. Now, I’ve made peace with this. I love what I do, and I’m pretty good at it, even if it is difficult to explain to non-technical people the difference between being a DBA and a Developer. I think the confusion comes from the fact that there are so many job titles, and they’re all kind of vague. People assume that if you work with data you’re a DBA. They assume that we all have the same skill set. The truth is so much more complicated.

See, there is a LOT of specialization out there. I’m currently a BI Consultant. Before that, I was a Data Architect, doing BI work. Before that, I was a “DBA”, developing relational databases. But, at the end of the day, I’ve got a specialization. I model and develop databases, and I move data around. Over the past few years, I’ve started traversing the BI stack, but at the end of the day, I’m really just a Database Developer. The simple truth is that there are so many angles and disciplines within the data universe, you have to be specialized. To borrow a silly cliché, anything else would be drinking from the fire hose. You’d get soaking wet, and you’d probably still be a little thirsty.

So, to the point. Paul Randal raised some great questions for this T-SQL Tuesday, “Why are DBA Skills Necessary?” I want to focus on a sub-topic he suggested, “Should there be cross-over between developer skills and DBA skills?” On the surface, the answer is an obvious, “Yes!” It was for me too, until I actually sat back and thought about it. My new answer is, “Yes, to a point.” See, asking me, a non-DBA, to attempt to be a DBA is like asking a psychiatrist to perform heart surgery. Yes, they both went to medical school, and yes, the psychiatrist could probably hold up his end of the conversation with a heart surgeon, but really, would you want him to cut on you? Probably not.

Here is my perspective: I need to be aware of what’s going on within the Database Administration universe. It’s why I attend both the Atlanta MDF, which is largely DBA-focused, and the BI User’s Group. I read blogs, books, and articles related to the entire data universe, not just my specialization. In fact, there are many, many things DBA’s do that I wish I knew better. I need to have a good vocabulary in the discipline, and I need to be able to carry on intelligent conversations with DBA’s. Does this mean that you should trust me to be your DBA? Probably not. By the same token, I also see application development in the same way. I should be able to discuss application development intelligently, but I don’t really know how to do it properly. Don’t get me wrong… I’m competitive enough that I want to be an expert at everything, but I’m also realistic enough to know that I’m never going to be.

See, to lump us all into one “DBA” bucket is to diminish the amazing job that so many DBA’s do every day. To pretend that I could step into a server room and hold my own unassisted is not only arrogant and delusional, it is dangerous.

Which brings me to the next point…

“At what point does a SQL Server installation need a DBA to look after it?” Immediately! Friends, your DBA is not just the chick who does backups and restores. You know how developers complain about not being brought into the development cycle early enough? Like when requirements are being created? Well, how does the DBA feel? We pretend that we understand the DBA discipline, and muck around in our development environments, making wild assumptions based on test data sets. Then, just before going to production, we dump an entire environment on the DBA and ask her to make our hot mess of a product perform well. Even worse, we act like that annoying taxi passenger telling the driver to take the Connector instead of the Perimeter because it’s 5:17 on a Friday and we think we’ve got Atlanta traffic down to a science. Like I know more about how to get around the city than the guy who does it for a living? Right.

Good DBA’s, and I’ve been lucky enough to work with quite a few over the years, know as much or more about the business being served by the database as the rest of the development team. That’s a key point… The DBA is part of the development team. Business requirements translate into administrative requirements, so get that DBA in the room early and often! And if you’re lucky, by hanging out with the DBA more, you’ll learn a thing or two about her side of the world.

Getting Schooled on Dynamic Pivot… Or, PIVOT Part 2

A note: I’m reposting this because I accidentally deleted it from WordPress. Because I’m an idiot.

I wrote a post about Overcoming my Fear of Pivot. With my newfound confidence, I decided to tackle dynamic pivots. This is a common scenario where you need to PIVOT, but you don’t know exactly what you’re going to end up with. Basically, you want to allow all of the possible column headers to come back with the aggregated data you need.

If you’re not familiar with PIVOT, go back and read the original post. If I’ve done my job properly, it should make sense. So, here’s what I did… I resisted the urge to hit Google to find a solution to the dynamic pivot problem. I opened SSMS and said, “Self, you’re under a deadline. Write it and see if you can get it to work all by your lonesome”. 45 minutes later, I had a working script that produced some cool real-world output, if I do say so myself.

Then, I hit Google. Then I saw Itzik Ben-Gan’s solution. My first response was, “Crap!” Actually, it was a much less ladylike expletive than that. The solution was… Beautiful. Elegant. Blew my method out of the water. You know how athletes have muscle memory? Well, developers have it too. We fall back to what’s comfortable and familiar. Sort of like our own version of T-SQL sweatpants and chocolate ice cream. Before I start in on the comparison of my solution and Itzik’s, let me say this: His is so much better than mine. Did I mention that it was elegant? And beautiful? But you know what? In a real development environment, with deadlines and giant to-do lists, I would have fallen back to my own comfort zone. I know this. I also know that next time I need to write a dynamic PIVOT, I’m going to know how to use his method.

Authors, when asked to give advice to aspiring writers, always say the same thing. “Write what you know.” For us IT Folk, there’s a corollary. “Write what you know. Hit the deadline. Then, go learn a better way.” Am I proud that I figured a solution out on my own? Yup. Am I a bit deflated that I didn’t come up with the same solution as Itzik Ben-Gan? Nope. Come on, it’s Itzik.

Personal note: I hate when I run across someone else’s T-SQL and ask them, “How does this work?”, and their response is, “I don’t know, I found it on a blog post/Google/forum.” Peeps, this is unacceptable. Don’t copy and paste until you understand what you’re seeing. Because someday you’re going to have to maintain that pilfered bit of code. If you don’t know what it does, then don’t use it. Comprehend your own code. We all borrow from the experts, but make sure you can explain it in 50 words or less. If you can’t, then back away from the Ctrl+V. Stretch your skills, learn new things, just don’t jeopardize a project by jumping the gun.

Okay, enough commentary. On to the solutions. The trick in a dynamic PIVOT is to create a string that has all of the column headers you need. This is where he and I diverged wildly. I fell back on a WHILE Loop over a set of rows contained in a table variable, he used the STUFF function with a FOR XML PATH() query output. I wrote my solution to address the same example from BOL that I ranted about in my first post. I modified his solution to produce the same output, and to clean out some unused variables that were in the sample I found. I’ve also resisted the urge to make little tweaks to my script after doing some extra research. Truly, I want to make the point that there’s what works… and what works beautifully.

My solution:

SET NOCOUNT ON;

DECLARE @vEmployeeIDTable as TABLE
(
EmployeeID varchar(20) NOT NULL
,ProcessedFlag bit NOT NULL DEFAULT(0)
)

DECLARE @vEmployeeID varchar(20)
DECLARE @vSQLString varchar(max) = ''
DECLARE @vEmployeeIDSELECT varchar(max) = ''
DECLARE @vEmployeeIDFOR varchar(max) = ''
DECLARE @vLoopCounter varchar(50) = '1'

INSERT INTO @vEmployeeIDTable(EmployeeID)
SELECT DISTINCT EmployeeID
FROM Purchasing.PurchaseOrderHeader;

WHILE (SELECT count(ProcessedFlag) FROM @vEmployeeIDTable WHERE ProcessedFlag = 0) > 0
BEGIN

SELECT @vEmployeeID = '[' + cast(MIN(EmployeeID) as varchar(20)) + ']'
FROM @vEmployeeIDTable
WHERE ProcessedFlag = 0

SET @vEmployeeIDSELECT = @vEmployeeIDSELECT + @vEmployeeID + ' as Emp' + @vLoopCounter + ','
SET @vEmployeeIDFOR = @vEmployeeIDFOR + @vEmployeeID + ','

UPDATE @vEmployeeIDTable
SET ProcessedFlag = 1
WHERE EmployeeID = cast(substring(@vEmployeeID, 2, LEN(@vEmployeeID) - 2) as int)

SET @vLoopCounter = @vLoopCounter + 1

END

SET @vEmployeeIDSELECT = SUBSTRING(@vEmployeeIDSELECT, 1, len(@vEmployeeIDSELECT) - 1)
SET @vEmployeeIDFOR = SUBSTRING(@vEmployeeIDFOR, 1, len(@vEmployeeIDFOR) - 1)

SET @vSQLString = '
SELECT VendorID, ' + @vEmployeeIDSELECT + '
FROM
(SELECT PurchaseOrderID, EmployeeID, VendorID
FROM Purchasing.PurchaseOrderHeader) p
PIVOT
(
COUNT (PurchaseOrderID)
FOR EmployeeID IN
(' + @vEmployeeIDFOR + ')
) AS pvt
ORDER BY pvt.VendorID; '

PRINT @vSQLString

EXECUTE (@vSQLString)

So, a quick rundown of what I did:

1) Create a table variable (@vEmployeeIDTable). Populate it with DISTINCT EmployeeID’s from Purchasing.PurchaseOrderHeader.
2) Declare the following variables:
a) @vEmployeeID – holds the EmployeeID I’m concatenating into the string during the WHILE loop
b) @vEmployeeIDSELECT – holds the EmployeeID string that I’ll use in the SELECT clause of my PIVOT. I separate this one out because I want to concatenate the column aliases just as they were in the BOL example.
c) @vEmployeeIDFOR – holds the EmployeeID string that I use in the FOR clause of my PIVOT. I don’t need column aliases here.
d) @vLoopCounter – holds a counter as I loop through the string concatenation. I use it to help name my column aliases (Emp1, Emp2…). The 1 and 2 are coming from this variable.
3) While I have unprocessed rows in my table variable, I loop through with a WHILE
a) Set @vEmployeeID to the minimum EmployeeID that hasn’t been processed. I also concatenate on the brackets I need since these will become column names. (Those brackets were a pain. I kept having to work around them. Another place where Ben-Gan’s method was more elegant)
b) Set @vEmployeeIDSELECT to itself plus the EmployeeID being processed (@vEmployeeID), and then set up the alias. (as 'Emp'+@vLoopCounter). Important note: I initialized the variable as an empty string (''). This is so that I’m not trying to concatenate a NULL value to a string on the first go-round.
c) Set @vEmployeeIDFor to itself plus the EmployeeID being processed
d) Update @vEmployeeIDTable to indicate that the EmployeeID has been added to the string variables
e) Update @vLoopCounter so that the next table alias will be the next number
4) Clean up the extra commas at the end of the string variables
5) Put the whole thing together in @vSQLString
a) Place the @vEmployeeIDSELECT variable where it needs to go
b) Place the @vEmployeeIDFOR variable where it needs to go
6) Execute the variable @vSQLString

This is the output:


Okay, not bad. Now, the elegant Itzik Ben-Gan solution:

DECLARE
@cols AS NVARCHAR(MAX),
@sql AS NVARCHAR(MAX);

SET @cols = STUFF(
(SELECT N',' + QUOTENAME(EmployeeID) AS [text()]
FROM (SELECT DISTINCT EmployeeID FROM Purchasing.PurchaseOrderHeader) AS Y
ORDER BY EmployeeID
FOR XML PATH('')),
1, 1, N'');

SET @sql = N'SELECT ' + @cols + '
FROM (SELECT VendorID, EmployeeID, PurchaseOrderID
FROM Purchasing.PurchaseOrderHeader) AS D
PIVOT(COUNT(PurchaseOrderID)
FOR EmployeeID IN(' + @cols + N')) AS P
ORDER BY P.VendorID;';

PRINT @sql

EXEC sp_executesql @sql;
GO

I know, right? Elegant. So what did he do?

1) Declared a couple of variables
a) @cols – holds the string of column values for the PIVOT
b) @sql – holds the SQL statement that gets executed
2) Used a FOR XML PATH('') command to concatenate the string. This is cool. The query pulls EmployeeID’s out of a derived table in the FROM clause. He orders by EmployeeID (which is not required), and outputs the result of this query using FOR XML PATH(''). The FOR XML PATH('') clause creates a single row that looks like this:

,[250],[251],[252],[253],[254],[255],[256],[257],[258],[259],[260],[261]

Wow, exactly what we need for the PIVOT. Well, almost. That’s what the STUFF function is for. Getting rid of “almost”.

3) Also, see how he used QUOTENAME to add the brackets he needed?

QUOTENAME(EmployeeID) AS [text()]

4) Then, since that leading comma (,[250]) is not needed, he uses the STUFF command to strip it off. STUFF looks like this:

STUFF ( character_expression , start , length ,character_expression )

a) character_expression – the results of the query containing the FOR XML PATH('') output
b) start – first character
c) length – how many characters to replace with what we’re “stuffing” in. In this case, a length of 1.
d) character_expression – an empty string, which is what’s “stuffed” into the first character expression, eliminating the comma.

Try this to illustrate it much more simply:

SELECT STUFF('abcdef', 1, 1, '');

Your result is: 'bcdef'. The empty string he specified basically replaces the first character which is the comma we don’t want. Seriously, I had to run the baby STUFF to understand it properly. The beauty of STUFF over SUBSTRING is that SUBSTRING requires you to tell the function the length of the resulting string, which would require a LEN function over the entire subquery to get it right. It saves you having to execute that bad boy more than once.
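To see why that matters, here’s the same comma-stripping done both ways on a plain string literal (so you can run it anywhere):

```sql
-- SUBSTRING needs the length of the result, so the expression appears twice,
-- once for the substring itself and once inside LEN():
SELECT SUBSTRING(',[250],[251]', 2, LEN(',[250],[251]') - 1);  -- [250],[251]

-- STUFF references the expression once and just overwrites character 1
-- with an empty string:
SELECT STUFF(',[250],[251]', 1, 1, '');                        -- [250],[251]
```

In the real query, that “expression” is the whole FOR XML PATH('') subquery, which is exactly the bad boy you don’t want to evaluate twice.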

5) Finally, he just puts the PIVOT query into @sql, concatenating in @cols where he needs to, and then executes it.

This is his output:

So he didn’t do pretty column aliases, but the important data is the same. And just take a look at the execution plans. That’s where I do feel just a bit deflated. Mine is monstrous. His? TWO queries. TWO! But that’s not the point. The point is, I had a blast figuring out how to write my own dynamic PIVOT. I had even more fun dissecting Itzik Ben-Gan’s method. (Yeah, I know. I’m a dork.) And, you can bet your sweet bippy that I’ll be working to make sure that FOR XML PATH, STUFF, and QUOTENAME all become part of my T-SQL muscle memory.

On Overcoming My Fear of PIVOT

I’m intimidated by PIVOT.  I’ve had a heck of a time wrapping my head around it, which is shameful, because Junior Accountants have been making pivot charts in Excel for years.  They get it, so why can’t I?  Well, I’ve got a few theories, mostly related to my occasional fear of unfamiliar things, and of feeling dumb.  Anyway, I finally got into a situation where I couldn’t avoid it, and I had to dig in there and learn it.  Nothing like a deadline to make you act like a proper student. 

I went to BOL, and looked it up.  Now, I’m a fan of Books Online.  It saves my tush daily.  But in this case… I’m sorry, but the explanation is nonsensical.  I mean, I read it, and what I comprehend is, “blah, blah, PIVOT, blah, you’re an idiot, Audrey, just give up now”. 

So, being forced to use a PIVOT, I had to break it down into chunks that my tiny brain could consume.  So, first, let’s look at the BOL syntax: 

SELECT <non-pivoted column>,
    [first pivoted column] AS <column name>,
    [second pivoted column] AS <column name>,
    ...
    [last pivoted column] AS <column name>
FROM
    (<SELECT query that produces the data>)
    AS <alias for the source query>
PIVOT
(
    <aggregation function>(<column being aggregated>)
FOR
[<column that contains the values that will become column headers>]
    IN ( [first pivoted column], [second pivoted column],
    ... [last pivoted column])
) AS <alias for the pivot table>
<optional ORDER BY clause>;
Hoo-kay.  I’m going to step you through my process of understanding this so I could construct my own PIVOT.  I’m even going to use the complex pivot example from BOL, which runs against the AdventureWorks2008 database.  We’re going in this order:  FROM, PIVOT, FOR, SELECT. 

But first, some rules.  There are always rules: 

RULES: 
1) You have to know how many columns you’re going to end up with after the PIVOT.  This means that this operation is great for things like months in a year, not so great for a varying number of pivoted columns.  You can tell it which columns to return, but the bottom line is you need to know what your output should look like.  If you want to break this rule, you’re writing dynamic SQL. 
2) You’re going to have to aggregate.  Even if you don’t really want to.  It’s required, but as always, there are ways to work the syntax.

THE BOL QUERY EXAMPLE: 

SELECT VendorID, [250] AS Emp1, [251] AS Emp2, [256] AS Emp3, [257] AS Emp4, [260] AS Emp5
FROM
(SELECT PurchaseOrderID, EmployeeID, VendorID
FROM Purchasing.PurchaseOrderHeader) p
PIVOT
(
COUNT (PurchaseOrderID)
FOR EmployeeID IN
( [250], [251], [256], [257], [260] )
) AS pvt
ORDER BY pvt.VendorID;

THE BOL QUERY OUTPUT: 

 

THE BREAKDOWN: 

1) FROM (Source Query):  This is the derived table that lives in the FROM clause.  It produces the data that is going to be aggregated and pivoted.  Write this first.  Get familiar with what data you’re working with.  Don’t forget to give it an alias.  I like the ever-creative “as SourceQuery” to help me remember what that derived table’s doing there in the first place. 

FROM
    (<SELECT query that produces the data>)
    AS <alias for the source query>
   
In the BOL example, this is the Source Query: 

FROM (
SELECT PurchaseOrderID, EmployeeID, VendorID
FROM Purchasing.PurchaseOrderHeader) as p

It returns this (screenshot in the original post: raw rows of PurchaseOrderID, EmployeeID, and VendorID): 

This is our raw data.  By the time we get to the bottom of this blog post, we’re going to COUNT PurchaseOrderIDs by EmployeeID, set some EmployeeIDs as column headers, and return what looks like a cross-tab report, with VendorIDs as row headers, EmployeeIDs as column headers, and the PurchaseOrderID COUNT as detail data.  Really.  I promise. 

2) PIVOT (Aggregation/Summarization):  This is where you say how to aggregate, or summarize, what will end up in the cells.  Think of it this way:  if this were a spreadsheet, with column headers and row headers, the data produced by the PIVOT clause would be the detail data living in the cells.  Now, you don’t always want to aggregate.  Sometimes you don’t have anything to aggregate; you just want to flip your data from rows to columns.  Too bad.  You’re aggregating something.  The solution I’ve seen is to do a MIN or MAX, but to make sure that the MIN or MAX is taken over something unique.  You’ll have to examine your data to see what works for you.  But back to PIVOT…
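
To make that workaround concrete, here’s a minimal sketch.  The table and column names are hypothetical (they’re mine, not AdventureWorks’): assume each (AccountID, AttributeName) pair appears exactly once, so MAX hands back the single value unchanged and exists only to satisfy PIVOT’s aggregation requirement.

```sql
-- Hypothetical table: one row per (AccountID, AttributeName) pair.
-- Because each pair is unique, MAX() returns the lone value as-is;
-- it's only there because PIVOT demands an aggregate.
SELECT AccountID, [Phone], [Email], [Fax]
FROM
    (SELECT AccountID, AttributeName, AttributeValue
     FROM dbo.AccountAttribute) AS SourceQuery
PIVOT
(
    MAX(AttributeValue)
    FOR AttributeName IN ( [Phone], [Email], [Fax] )
) AS pvt;
```

No counting, no summing: the data just flips from rows to columns.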

PIVOT
(
<aggregation function>(<column being aggregated>)

In the BOL example, it looks like this: 

PIVOT
(
COUNT (PurchaseOrderID)

So, what it’s saying is that the “detail” data (think like you’re in Excel for a moment) should be the count of PurchaseOrderIDs.  Simple enough.  But where’s my GROUP BY?  It feels like heresy, aggregating something without a GROUP BY.  Hang in there…

3) FOR (Sort-of GROUP BY):  FOR establishes what will be column headers for the PIVOT-ed (aggregated) data.  One cool thing about it not being a true GROUP BY is that I don’t have to include everything from my Source Query (FROM).  If you look at the BOL example, VendorID from my Source Query (FROM) isn’t included in the PIVOT or FOR clauses.  It’s a pass-through column.  It’s going to be there in the SELECT, and therefore in the output, but it isn’t part of the PIVOT process.  In fact, you don’t have to include VendorID at all.  The data probably wouldn’t make sense, but to each his own, right? 

FOR

[<column that contains the values that will become column headers>]

    IN ( [first pivoted column], [second pivoted column],

    … [last pivoted column])

) AS <alias for the pivot table>

In the BOL example, the query developer chooses to return the number of purchase orders for a specific set of Employees.  Yes, in the example it’s arbitrary, because they return five and there are actually 12 distinct EmployeeIDs in the Purchasing.PurchaseOrderHeader table, but I’m not here to judge.  How do they do this?  Like this: 

FOR EmployeeID IN
( [250], [251], [256], [257], [260] )
) AS pvt

This is telling the PIVOT to produce 5 columns: [250], [251], [256], [257], and [260].  (You don’t have to have the brackets, except that “250” wouldn’t be a valid column name without them.)  Those numbers are actual EmployeeIDs returned from the Source Query.  You’re saying “FOR” an EmployeeID “IN” a specific set of values that were returned in the Source Query (FROM).  You’re essentially establishing a GROUP BY on EmployeeID.  What’s being “grouped” by the FOR clause?  The data that you’re aggregating in the PIVOT clause.  Cool, huh?  The COUNT of PurchaseOrderIDs will be placed underneath the column corresponding to the EmployeeID it belongs to.  Don’t forget to alias the pivoted result.  Something like “IRockBecauseIFiguredThisOut” works well.  🙂 Also, this is where you close the parenthesis that you opened right after the PIVOT keyword. 
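
If it helps to see the GROUP BY that PIVOT is hiding, the same result can be produced the old-fashioned way, with an explicit GROUP BY and CASE expressions.  This rewrite is mine, not BOL’s; it works because COUNT ignores the NULLs that each CASE produces for non-matching rows.

```sql
-- The pre-PIVOT cross-tab idiom: one CASE per output column,
-- with the GROUP BY out in the open.
SELECT VendorID,
    COUNT(CASE WHEN EmployeeID = 250 THEN PurchaseOrderID END) AS Emp1,
    COUNT(CASE WHEN EmployeeID = 251 THEN PurchaseOrderID END) AS Emp2,
    COUNT(CASE WHEN EmployeeID = 256 THEN PurchaseOrderID END) AS Emp3,
    COUNT(CASE WHEN EmployeeID = 257 THEN PurchaseOrderID END) AS Emp4,
    COUNT(CASE WHEN EmployeeID = 260 THEN PurchaseOrderID END) AS Emp5
FROM Purchasing.PurchaseOrderHeader
GROUP BY VendorID
ORDER BY VendorID;
```

Same output as the BOL PIVOT; the FOR clause is doing the job of that GROUP BY plus the CASE filters.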

Personal Note:  This clause is one of the reasons I hate this BOL example.  It doesn’t make sense that I would hard-code EmployeeID’s.  A PIVOT example with months or years or something would be a more likely real-world scenario.  Making it an example implies that it’s a good idea, and that every person reading BOL knows not to assume that Employee 257 will be a lifer at Adventure Works.  But like I said, I don’t judge. 
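
For what it’s worth, here’s the kind of example I’d rather see.  This is my own sketch against the same table: purchase-order counts per vendor, by order month.  Month numbers are a closed, known set, so hard-coding the column list doesn’t set a trap for future readers.

```sql
-- A more realistic PIVOT: PO counts per vendor by month of OrderDate.
-- MONTH() yields 1-12, a fixed set, so the hard-coded IN list is safe.
SELECT VendorID,
    [1] AS Jan, [2] AS Feb, [3] AS Mar, [4] AS Apr,
    [5] AS May, [6] AS Jun, [7] AS Jul, [8] AS Aug,
    [9] AS Sep, [10] AS Oct, [11] AS Nov, [12] AS Dec
FROM
    (SELECT PurchaseOrderID, VendorID, MONTH(OrderDate) AS OrderMonth
     FROM Purchasing.PurchaseOrderHeader) AS SourceQuery
PIVOT
(
    COUNT(PurchaseOrderID)
    FOR OrderMonth IN ( [1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12] )
) AS pvt
ORDER BY pvt.VendorID;
```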

4) SELECT (Presentation):  Why is it that SELECT is always the simplest part of a query?  It seems so important, but it really doesn’t do much.  It’s like the presentation layer of the query.  Here, you’re telling the query what to output.  As long as it was part of the Source Query (FROM), or defined as a column header in the FOR clause, you can include it in the SELECT clause.  In fact, if you’re feeling frisky, you can leave off columns.  The query doesn’t care, because the SELECT is just there to make things pretty. 

SELECT <non-pivoted column>,

    [first pivoted column] AS <column name>,

    [second pivoted column] AS <column name>,

    …

    [last pivoted column] AS <column name>

In the BOL example, it looks like this: 

SELECT VendorID, [250] AS Emp1, [251] AS Emp2, [256] AS Emp3, [257] AS Emp4, [260] AS Emp5

VendorID is a pass-through (non-pivoted) column.  It’s there to supplement the PIVOTed data.  The other columns are the ones we established in the FOR clause.  Just remember that everything you want to work with needs to be included in that Source Query (FROM clause). 

Putting it all together, it looks like this: 

SELECT VendorID, [250] AS Emp1, [251] AS Emp2, [256] AS Emp3, [257] AS Emp4, [260] AS Emp5
FROM
(SELECT PurchaseOrderID, EmployeeID, VendorID
FROM Purchasing.PurchaseOrderHeader) p
PIVOT
(
COUNT (PurchaseOrderID)
FOR EmployeeID IN
( [250], [251], [256], [257], [260] )
) AS pvt
ORDER BY pvt.VendorID;

The output looks like this (screenshot in the original post: one row per VendorID, with the counts under Emp1 through Emp5): 

So there you have it.  A peek into my thought process as I worked to overcome my fear of PIVOT.  I’m good now.  I’ll still have to look up the syntax whenever I write it, but at least I won’t break out into a cold sweat next time.  And next up for me… PIVOT with an unknown/dynamic number of output columns.  Woo-hoo!  Dynamic SQL! 

Query on, my friends.