Common Table Expressions (CTEs) make complex data summarization easy.
Common Table Expressions, or CTEs, are one of my favorite tools in SQL. Simply put, they allow you to encapsulate your business logic. I’ve written in the past about how to use CTEs to encapsulate data arithmetic. However, you can generalize that encapsulation further, and lately I’ve been doing a lot of that.
Data Analysis
One of my primary roles in our organization is data analysis. Whether it’s reviewing business process changes over time or identifying edge conditions in transactions, SQL is my primary tool for any number of activities. Whether you use the green-screen STRSQL utility or a PC client such as DBeaver (which I talk about in this article), SQL is a great way to dig into data and present it to the business.
What I find myself doing a lot is organizing data. This organization is based on data points in the ERP, and the categorization can get pretty involved. For example, I might need to group inventory by warehouse, where certain warehouses belong to a group. The groups may be somewhat ad hoc, so I have to figure out how to aggregate by that ad hoc grouping. Another case might be grouping by general ledger event types, maybe selecting costing separately from inventory movements. Yet another case is when a field has many possible values but you need to segregate them into ranges. For example, you might want to accumulate two different totals based on whether an invoice has a zero or non-zero amount.
You may be saying to yourself that this is the job of a CASE statement. And indeed you would be correct. However, the problem with CASE statements comes when you try to group by them. It’s not impossible, certainly, but it does require a syntactical technique that makes it ungainly.
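To make that concrete, here’s a minimal sketch of the invoice example above. The INVOICES table and its INV_AMOUNT column are hypothetical stand-ins for whatever your ERP actually uses:

-- Hypothetical file and field names; substitute your own invoice table
SELECT CASE WHEN inv_amount = 0 THEN 'ZERO' ELSE 'NONZERO' END AmtClass,
count(*) Cnt
FROM invoices
GROUP BY CASE WHEN inv_amount = 0 THEN 'ZERO' ELSE 'NONZERO' END

The CASE expression has to appear twice, once in the SELECT list and again in the GROUP BY. That duplication is exactly what we’re going to get rid of.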
A Real Example
Rather than try to provide examples of made-up business data, I think I can present all of the issues using a built-in table from the IBM i itself. I actually use this when wearing one of my other hats, my system architecture hat. Let’s start with a very simple case.
SELECT * FROM TABLE(object_statistics('MYLIB', '*PGM'))
This is a very useful SQL query, and one that is the basis for a lot of my architecture analysis. The object_statistics table function is the SQL interface to the DSPOBJD command. It allows you to get a list of objects in a library (or in a group of libraries using special values like *LIBL). The statement as shown gives you a list of all the programs (object type *PGM) in the library MYLIB. What can I do with that? Well, one thing I can do is see whether the programs have been used since they were created. The last_used_timestamp field contains, as its name suggests, the timestamp of the last time the object was used. But if I want to count the used and unused programs, I can’t just group by last_used_timestamp, because the timestamp is probably different for every program that was used. Instead, I’d want to group by a yes/no flag. Getting that flag is relatively easy:
SELECT CASE WHEN last_used_timestamp IS NULL THEN 'N' ELSE 'Y' END WasUsed
FROM TABLE(object_statistics('MYLIB', '*PGM'))
That’s pretty straightforward. If the timestamp is null, the WasUsed field is N; otherwise, it’s Y. However, if I want to get a count of each, I run up against one of the idiosyncrasies of SQL: if I want to group by a derived field, I must repeat the computation in the GROUP BY clause. So the statement ends up like this:
SELECT CASE WHEN last_used_timestamp IS NULL THEN 'N' ELSE 'Y' END WasUsed,
count(*) Cnt
FROM TABLE(object_statistics('MYLIB', '*PGM'))
GROUP BY CASE WHEN last_used_timestamp IS NULL THEN 'N' ELSE 'Y' END
WasUsed Cnt
N 1307
Y 1450
As you can see, it gives me a nice result, but, as I noted, the syntax is very ungainly. And the more conditions I add, the worse it gets. Let’s say I just want to add one more condition: the program model. The easiest way to determine the program model is to see whether the object attribute contains LE; attributes such as RPGLE or CLLE indicate ILE programs, and anything else is OPM. But just take a look at what happens to the SELECT statement as we add that second CASE:
SELECT CASE WHEN last_used_timestamp IS NULL THEN 'N' ELSE 'Y' END WasUsed,
CASE WHEN objattribute LIKE '%LE%' THEN 'ILE' ELSE 'OPM' END Model,
count(*) Cnt
FROM TABLE(object_statistics('MYLIB', '*PGM'))
GROUP BY CASE WHEN last_used_timestamp IS NULL THEN 'N' ELSE 'Y' END,
CASE WHEN objattribute LIKE '%LE%' THEN 'ILE' ELSE 'OPM' END
ORDER BY 1 DESC, 2
So now I have two rather verbose derived values, both of which I have to duplicate, leaving me open to maintenance issues. The good news is that if I do make an editing mistake, the SQL validator will catch it, but it’s still a lot of work. And that’s finally where the CTE comes in.
WITH T1 AS (SELECT
CASE WHEN last_used_timestamp IS NULL THEN 'N' ELSE 'Y' END WasUsed,
CASE WHEN objattribute LIKE '%LE%' THEN 'ILE' ELSE 'OPM' END Model
FROM TABLE(object_statistics('MYLIB', '*PGM')))
SELECT WasUsed, Model, count(*) Cnt FROM T1
GROUP BY WasUsed, Model ORDER BY 1 DESC, 2
WasUsed Model Cnt
Y ILE 511
Y OPM 939
N ILE 426
N OPM 881
I create my CTE with the name T1, and in it I define all of my derived values. The nice thing is that you can test the inner SELECT all by itself, just to make sure you’re getting the data you want. Once it’s debugged, I wrap it in the WITH … AS clause to create the CTE, and then I select those derived fields by name to group and order the results. Note that I can define the ORDER BY clause using column positions. That works really well, but I have to be careful if I move the columns around in my SELECT statement. Obviously, there are pros and cons to ordering by position versus by name, and you can pick the one that best suits your needs.
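If you’d rather not track column positions, the derived names work just as well; only the last line of the statement changes:

GROUP BY WasUsed, Model ORDER BY WasUsed DESC, Model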
Another Option
CTEs aren’t the only option for aggregating data using a derivation, particularly when you’re grouping together sets like the program model. You can also create a temporary table that pairs each attribute value, such as RPG or RPGLE, with its model, and then JOIN to it. But since the CTE handles that condition as well as many others, it’s the one I lean on the most.
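As a quick sketch of that alternative, here the attribute-to-model pairs are inlined with a VALUES clause rather than loaded into a real temporary table, and the four attribute values are only illustrative; your libraries may well contain others:

-- Illustrative attribute/model pairs; a real temporary table works the same way
SELECT m.model, count(*) Cnt
FROM TABLE(object_statistics('MYLIB', '*PGM')) o
JOIN (VALUES ('RPG', 'OPM'), ('CLP', 'OPM'),
('RPGLE', 'ILE'), ('CLLE', 'ILE')) AS m (attr, model)
ON m.attr = o.objattribute
GROUP BY m.model

One caveat: the inner JOIN silently drops any program whose attribute isn’t in the list, whereas the LIKE test in the CTE version classifies everything.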
Hopefully, this will give you another way to look at your data.