Guidelines for Selecting PC Power Supply

For those of us who build our own PCs, selecting the power supply is an important decision, and it's easy to overestimate one's needs. Below are some guidelines:

First of all, the power supply draws only the amount of power the computer is actually consuming, plus the conversion losses given off as heat (roughly 20% extra for an 80%-efficient supply). There has been some confusion in online blogs where folks thought that a higher-capacity power supply (in watts) would consume more electricity; that is not true.

I have read some articles from people who looked deeply into the underlying technologies and who suggest sizing the power supply about 50% above what it will actually be asked to provide. I believe the CPU and video card are the only big wattage consumers in my systems.

I have also read, again from people getting down to the circuit topologies and part quality of power supplies, that there is a wide range of quality among power supplies of a given capacity, so it is worthwhile to buy reliability and stability by spending more than the minimum for a given wattage. What this has led to for me personally seems pretty mainstream in terms of the product space: I have three computers set up with Corsair 500-watt, 80 Plus Bronze rated power supplies, at $62 US each. I note that at Micro Center, 400 watts is the smallest unit they stock, so 500 watts is on the low end size-wise. In future builds I will likely spend more per watt to get even more quality, though I have not had any problems with this model and brand.

For my system, this is what I estimate for maximum power needs: the Xeon 1276 is rated at 85 watts, and the GTX 1050 video card draws 75 watts max during gaming. I note the PCI-e spec allows at most 75 watts delivered through the PCI-e slot; if a video card can draw more than that, it has one or more sockets on it for power supply cables. Best I can tell, an SSD can draw maybe 2 watts and a hard disk 10. I have an SSD, an M.2 drive, and a spinning platter, so that would be 14 watts. So, this totals 174 watts. Adding a few watts for motherboard, RAM, and USB devices very conservatively gives a maximum consumption of 200 watts, less than half of a 500-watt power supply. I note the i7-8700K, considered to be a popular high-end CPU for audio, is rated at 95 watts at standard clock speed.

So for audio recording and production, I imagine it would be an unusual system that really needs a 750 or 1500 watt power supply; my recommendation is to spend more money on quality than on excess power capacity. For overclocking, I don't have the knowledge to speak to this (adding water pumps, tons of fans, and whatever extra power the CPU draws when the clock speed is increased); there is lots of OC information on the interweb for that.

In any case (bad pun?), anyone assembling a system can run their own numbers and not simply guess on power supply sizing. If I have the numbers wrong, or am missing something, glad to be educated.

MSDB system tables: a Treasure Island for the intrepid administrator. Part 1, SQL Agent.

There are times when coding up a query using MSDB system tables or views can save an enormous amount of time for some administrative tasks. Now a thoughtful administrator could reasonably ask “Why would I want to query and update system tables? Surely Microsoft has provided tools to handle any tasks that one might need to perform.”

My answer is that in computer years, MSDB and the functionality it supports, such as SQL Agent, date back to the Ming Dynasty. To the best of my recollection, SQL Agent has not gained much functionality since SQL Server 2000, and there does not appear to be much activity in this area on the horizon. So, the intrepid administrator’s kit bag of queries can be a wonderful thing once stocked with MSDB helpers.

So, given that moment when one faces the prospect of using Management Studio to right-click over 100 Agent jobs in order to disable them, and some sort of script starts looking attractive, how does one proceed? There are two paths, and I normally follow both. First, there are lots of internet posts on just about any question one has; when I do an internet search, I always start the query with ‘sql server’ followed by the specifics. The second path is to get on a test system with Management Studio and start exploring the system tables and views: looking at names, and selecting the top 1,000 rows to see what the data looks like. Let’s turn our attention now to how SQL Agent has been exposed to us in MSDB. Many of these objects may have started life as system tables but evolved into system views. For querying, the distinction is not important, but when it comes to updating, of course, it is tables we want to target.

I will be using SQL Server 2017 for my examples here. Earlier releases may have slight name changes, but the functionality is very similar. I will indicate the high points such as join columns, and encourage the reader to do some selects from the tables to get their own ideas as they prepare data shopping lists for their own queries.
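As a minimal sketch of that kind of exploration (the tables and columns below are real msdb objects, but the TOP value and the ordering are just illustrative choices):

-- Peek at the Agent job and job step tables in msdb
USE msdb;
GO
SELECT TOP (1000) job_id, [name], [enabled], date_created
FROM dbo.sysjobs
ORDER BY [name];

SELECT TOP (1000) job_id, step_id, step_name, subsystem
FROM dbo.sysjobsteps
ORDER BY job_id, step_id;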

For Agent, a great place to start is the table dbo.sysjobs. There is no primary key designated for this table, but I believe job_id performs that role and contains a GUID-like value. The other columns expose much of what you will see when you use Management Studio to view the properties of a job: name, description, start step, and others that may be of interest depending on the situation.

Job steps are implemented using dbo.sysjobsteps. It joins to dbo.sysjobs on job_id, and its unique key is job_id plus step_id. A good way to get a feel for its data is to take the job_id from a job you are familiar with, do a select * for that job_id, and order the result by step_id. Now on to some example queries:

SELECT [name],SUSER_SNAME(owner_sid) AS owner
FROM msdb.dbo.sysjobs
where SUSER_SNAME(owner_sid) != 'sa'

It's a good practice to ensure that all system jobs are owned by sa. The query above lists non-compliant jobs. SUSER_SNAME is a function which gets the login name from the sid. Keep an eye out for functions like this, which can eliminate the need to join to another system table just to get the name for a value. Note the use of brackets: not strictly necessary for the query to work, but a reminder that many of the column names in system tables are SQL Server reserved words.

select [name],[description] from msdb.dbo.SysJobs
where [enabled] = 1
order by [name]

The query above filters on the enabled status of the job. I recently used this as part of a system migration to ensure that all jobs had been disabled on a server after its databases and jobs had been migrated to a new system. This can be changed into an update query to disable a large number of jobs, with perhaps a where clause based on job name.
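As a sketch of what that update might look like (the name filter here is a hypothetical placeholder; note also that a direct update of dbo.sysjobs bypasses the documented msdb.dbo.sp_update_job procedure, which is the safer route if you need Agent to pick up the change immediately):

-- Sketch: disable every job whose name matches a pattern
-- The LIKE pattern is a placeholder; the supported alternative is:
--   EXEC msdb.dbo.sp_update_job @job_name = N'SomeJob', @enabled = 0;
UPDATE msdb.dbo.sysjobs
SET [enabled] = 0
WHERE [name] LIKE 'Migration%';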

SELECT
j.[name] as jobname
,js.[job_id]
,js.[step_id]
,js.[step_name]
,js.[subsystem]
,js.[command]

from msdb.dbo.sysjobs j

left join msdb.dbo.sysjobsteps js
on j.job_id = js.job_id

where j.[name] like '%backup%'
order by j.[name], step_id

The query above provides an example of joining the job and jobstep tables together. Note the use of a left join, as jobs can be set up with no steps.

SELECT
j.[name] AS [JobName]
,case
when j.notify_level_email = 0 then 'Never'
when j.notify_level_email = 1 then 'When job succeeds'
when j.notify_level_email = 2 then 'When job fails'
when j.notify_level_email = 3 then 'When job completes (success or fail)'
end as notify_level
,isnull(o.[name],'none') as notify_operator_name
FROM msdb.dbo.sysjobs j
LEFT JOIN msdb.dbo.sysoperators o
ON j.notify_email_operator_id = o.id

The query above introduces the dbo.sysoperators table and shows how it can be joined to the jobs table to get information about job notifications. Notice the need to use a case expression to translate the notification integer into a meaningful description. The meanings of these values are given in the Microsoft documentation for dbo.sysjobs, which an internet search will turn up.

use msdb
go
with a as (
SELECT
sj.[name] as jobName,
jh.step_id,
jh.step_name,
dbo.agent_datetime(jh.run_date,jh.run_time) as run_date,
jh.message as step_message,
ROW_NUMBER() over (partition by sj.name, jh.step_id
order by dbo.agent_datetime(jh.run_date,jh.run_time) desc) as run_seq,
run_status
FROM dbo.sysjobs sj

join dbo.sysjobhistory jh
on jh.job_id = sj.job_id

where jh.step_id = 0
)
select jobName,step_id,run_date,step_message
from a
where
run_seq = 1 and run_status = 0
order by jobName, step_id

The query above brings in the dbo.sysjobhistory table, which contains job and step run history. One aspect of this table that is not intuitively obvious is that it uses step_id 0 to hold the overall job run history. The query checks the job history to identify jobs where the most recent run failed. The result can be mailed to developers to indicate which jobs have failed and may need to be fixed. Note also the use of the function dbo.agent_datetime, which takes the separate integer date and time columns in the job history and converts them into a datetime value. This is an undocumented function in SQL Server, so it may be removed in a future release. Were this to happen, one would need to write one's own function to do the same sort of conversion.
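Were agent_datetime ever to be dropped, a home-grown replacement is straightforward to sketch. The function below is my own invention (name and all), shown only to illustrate the conversion; run_date is an integer such as 20240315 and run_time an integer such as 93055 for 09:30:55.

-- Hedged sketch of a replacement for msdb.dbo.agent_datetime
CREATE FUNCTION dbo.my_agent_datetime (@run_date int, @run_time int)
RETURNS datetime
AS
BEGIN
    RETURN DATETIMEFROMPARTS(
        @run_date / 10000,           -- year
        (@run_date / 100) % 100,     -- month
        @run_date % 100,             -- day
        @run_time / 10000,           -- hour
        (@run_time / 100) % 100,     -- minute
        @run_time % 100,             -- second
        0);                          -- milliseconds
END;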

I hope these examples and discussion help you harness the SQL Agent system tables when a tedious or automated task needs to be performed by a script.

A land title history of Oace Acres

PDF Images of Oace Acres Plat Maps. Thanks so much to Amanda and Jennifer at the Washington County Public Works Department for their help!

PDF: Oace Acres 1955

PDF: Oace Acres Second Addition 1956

PDF: Oace Acres Third Addition 1959

PDF: Oace Acres Fourth Addition 1969

Oace Acres Restrictive Covenants

From 1971 to 1978 I lived in an area of Lake Elmo, MN between Lakes Olson, Demontreville, and Jane. The neighborhood had been mapped out and developed in stages under the plat names of Oace Acres and Oace Acres Second through Fifth Additions. The second plat is named ‘Oace Acres Second Addition’; I believe this is better read as ‘Oace Acres Second, Addition’. What I am discussing here covers this inner area between the lakes, all the way to the end of Hidden Bay Road, but does not extend to other neighborhoods that are considered part of the Tri-Lakes area, such as lots on the ‘outer’ side of these lakes, so to speak. Those were developed under different plats.

The process of filing a plat allows a developer to work with local government to take an undivided parcel of land (often former farmland or woods) and divide it into residential lots. This process results in a plat map, approved by local government and filed in the county real estate records, which defines lot boundaries and provides a convenient legal name for the lots. As is often the case, restrictive covenants were also filed and attached to the property. The purpose of these covenants was to ensure minimum levels of home quality and to preserve the natural surroundings of the wooded area.

The underlying land was granted to its first private owner from the federal government in 1854. These federal land grants are the first entry in the chain of title for most lands west of the areas of the east coast that were settled prior to the 1850s. In 1947, a plat called the ‘Three Lakes Farm Addition’ was filed covering an area where the northernmost boundary was along what became Windbreak Hill (road name), and extending down into the area where Deer Pond and Jack Pine Trail (road names) were placed. I don’t have any more information regarding the intents or ultimate use of this plat.

The first plat for Oace Acres was approved by the city of East Oakdale (presumably subsumed by Lake Elmo at some point) in September of 1955. It defined lots essentially on the ‘outside’ part of the curve of Hidden Bay Road, all of which were on Lake Olson, the channel between Olson and Demontreville, or Lake Demontreville itself. Lot 1 is owned by the Carmelite Nuns, and I don't think a dwelling was ever built on it.

Lot 13 was a large one facing the channel between the lakes, and I believe it has been divided into three or four lots since the plat was filed.

In 1956, the Oaces initiated a judicial proceeding to adjudicate all property rights to the land underlying Oace Acres and the land to the south of what became Windbreak Hill. This placed the land into the Torrens property records, and hence Oace Acres received a Torrens property certificate rather than the older abstract form of title. In Washington County property records, the judgment from this case is referred to as Torrens Case 334, and it granted the Oaces title to the land, subject to property interests in the lots that had already been sold. Most homeowners of this plat purchased their land from the Oaces using contracts for deed.

The second plat filed was Oace Acres Second Addition, approved in October of 1956. It looks like Washington County had by then taken over the plat process from the cities. This plat created the remaining Hidden Bay lots that are north of an imaginary line extending east-west along Windbreak Hill. It included lots on the west end of Lake Jane, with lot 14 being set aside for recreational use by the landowners of this plat who did not get lake lots. The landlocked lots in this area were granted a lien on lot 14 for ‘boating and bathing purposes’, with the lienholders liable in equal shares for the expenses of maintaining this lot.

Lots 24 and 25 were to the north of BirchBark Lane and were not developed until recent years. These are owned by the Carmelite Nuns. I understand that the houses on these lots are for the use of the Carmelite Nuns and the Jesuit Retreat House, both of which own large land parcels bordering the northeast portion of Lake Demontreville.

Oace Acres Third Addition was approved in January of 1959 and covered the area to the east of Hidden Bay Road and south of Windbreak Hill. It overlaid the Three Lakes Farm Addition, and I presume Three Lakes disappeared from a legal standpoint. The road that became Jack Pine Trail was called ‘Halfway Road’ in the plat.

Oace Acres Fourth Addition was approved in May of 1969 and defined lots on both sides of Hidden Bay Road south of the Windbreak Trail intersection and running to the west end of Hidden Bay Road.

Oace Acres Fifth Addition was approved in 1971, I think, and covered the development done to the west of Deer Pond Road.

Some historical documents relating to the study of law

The other evening, my son Tony and I watched The Paper Chase, a melodramatic movie plotted over the course of a first year of law school at Harvard in the 1970s. Setting aside the childish outbursts in class by the protagonist and the pretentious enunciation of the contracts professor, Kingsfield, there were many elements familiar to my own first year of law school at a somewhat less prestigious university. Among them were the tensions caused by the fear of being called on randomly and without warning to ‘present’ a case, the study group and the petty squabbling thereof, the division of students into those who choose to participate in classroom discussion and those who don't, contract law, and not least among these, the four-hour essay exam, written in Blue Books, by which one's grade for the entire year is determined.

I must confess that having watched the movie prior to attending law school, I copied the ritual of ditching the dorms and spending a weekend of intensive study in an off-campus motel. The movie also motivated me to fully study any readings assigned prior to the first day of class. In contrast to the movie, my own beloved Contracts professor, Bill Martin, was a gentle soul, not given to theatrics. In an official nod to the movie, during an orientation lecture one of the professors archly recited the ‘Skull full of mush’ lecture, much to our amusement.

So, I dug through my old files and scanned a few interesting tidbits. First is my LSAT answer sheet and score. I don't have the questions, regrettably.

LSAT Answer Sheet and Score

I thought it would be amusing to take a look at textbook price inflation since the fall of 1982. Since all first year students in a section (our class was divided into three sections, each of which took the same classes the entire first year) had the same textbooks, the school was kind enough to present us with our foot-and-a-half-tall pile of books, pre-selected and stacked, and a bill for the princely sum of over $400, if I recall correctly. As you can see, I was in section 1. All of the classes were full year except for Criminal Law, which was a semester long and was replaced by a semester of Property Law in the spring. One thing new to me in law school was the practice professors had of referring to the textbooks by the last name of the author. I think there were a couple of reasons for this. First, we might have multiple textbooks, and second, the professor might actually know or have met the authors of the book. So, for instance, in Torts class, the professor might say “Please turn your attention to the footnotes on page 127 of Prosser.” It also struck me as pretentious, so I promptly adopted this practice in my own conversation.

First Year Law Books

And since Tony is taking exams around this time of year, I included my exam schedule. While the only final grade at mid-term was Criminal Law, we worked pretty hard to master the other subjects as well. For the year-long classes, the mid-term weighed one third of the grade or less, with some suspicion that certain professors ignored mid-term results completely, as one's knowledge at the end of the course was the only thing of interest. Looking at this at the beginning of school, I thought December was going to be a breeze, with an apparently leisurely exam schedule. In fact, I prepared myself intensely and was able to place myself in an extremely focused state for each exam. During the exam weeks, I thought longingly of how relieved I would feel after exams were over. But after the last exam, I was so fried that I was incapable of maintaining any sort of feeling whatsoever. In retrospect a sub-optimal practice, but I must confess to loosening the bonds of mental discipline by imbibing alcoholic beverages for a short period of days, and in adopting this practice I was by no means unique among my class.

Exam Schedule, December 1982

Respectfully submitted, Mark C. Knutson

SQL Server Analytical Ranking Functions

There are times when the expressive power of the SQL Server Windowing or Analytical Ranking functions is almost breathtaking. Itzik Ben-Gan observed: “The concept the Over clause represents is profound, and in my eyes this clause is the single most powerful feature in the standard SQL language.”

The other day, I solved a potentially complex query problem in an elegant manner using some of the newer SQL Server functions, including row_number, dense_rank, sum, and lag. Before basking in the glow of this remarkable query, let's take a brief tour of some of these functions. What I hope to instill here is some familiarity with these functions and some potentially unexpected uses for them, so that when you are developing a query, you may recognize situations where they provide a simpler and more performant result.

Perhaps the simplest to understand, and certainly one of the most frequently used, windowing functions is row_number. It first appeared in SQL Server 2005, and I end up finding a use for it in almost all my non-trivial queries. Conceptually it does two things. First, it partitions the result set based on the values in zero or more columns defined in the query. Second, it returns a numeric sequence number for each row within each subset from the partition, based on ordering criteria defined in the query.

If no partitioning clause is present, the entire result set is treated as a single partition. But the power of the function really shows when multiple partitions are needed. My most frequent use of sequence numbers within multiple partitions is to find the last item in a list; frequently the ordering criterion is time, and the query finds the most recent item within each partition. Examples of this are getting an employee's current position and getting the most recent shipment for an order.

The partition definition is given with a list of columns, and semantically the values in the columns are ‘and-ed’ together or combined using a set intersection operation—that is, a partition consists of the subset of query rows where all partition columns contain the same value.

The ordering criteria consist of columns that are not part of the partition criteria, and the ordering for each column can be defined as ascending or descending. If the values in the ordering columns, taken together, do not yield unique values for all rows within a partition, row numbers are assigned arbitrarily to rows with duplicate values. To make the results deterministic, that is, to yield the same result for each query execution, it is necessary to include additional columns in the ordering clause to ensure uniqueness. Such extra columns are referred to as ‘tie-breakers’. One reliable ‘uniqueifier’ is an identity column, if the table has one. In the example below, I show an imaginary employee database and create row numbers that identify both the first and last position per employee.
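A minimal sketch along those lines is shown below; the table dbo.EmployeePosition and its columns are invented stand-ins for the imaginary employee database, so the actual example may differ in its details.

-- Sketch: first and last position per employee (table and columns are hypothetical)
WITH ranked AS (
    SELECT
        EmployeeId,
        PositionTitle,
        StartDate,
        ROW_NUMBER() OVER (PARTITION BY EmployeeId
                           ORDER BY StartDate ASC,  PositionId ASC)  AS first_seq,
        ROW_NUMBER() OVER (PARTITION BY EmployeeId
                           ORDER BY StartDate DESC, PositionId DESC) AS last_seq
    FROM dbo.EmployeePosition
)
SELECT
    EmployeeId,
    PositionTitle,
    StartDate,
    CASE WHEN last_seq = 1 THEN 'Current position' ELSE 'First position' END AS which_one
FROM ranked
WHERE first_seq = 1 OR last_seq = 1
ORDER BY EmployeeId, StartDate;

PositionId plays the tie-breaker role described above, so two positions starting on the same date still receive deterministic row numbers.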

As in the example, I often generate the row number within a Common Table Expression (CTE), and refer to it in subsequent queries.

Among the ranking functions, second in frequency of use when I am writing queries is dense_rank (although rank could be used as well). I used to think that unless I was writing queries for a school calculating class rank, I had no use for the ranking functions. The general power of this function became apparent to me when I began to see other query problems in terms of ranking: for instance, as a means of assigning numbers to partitions of a set, and then using those numbers as unique identifiers for each partition.

I will note that using the result of an arithmetic function as an identifier is not an immediately intuitive concept, but it really generalizes the power of the windowing functions.

Rank is defined as the cardinality of the set of lesser values plus one; dense rank is the cardinality of the set of distinct lesser values plus one. When using these values as identifiers, either function will work; I prefer dense rank for perhaps no reason other than the aesthetic value of seeing the values increase sequentially. While these definitions are mathematically precise, I believe looking at an example query result will make the difference between the functions intuitively clear.
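A minimal side-by-side sketch (the scores are invented) makes the difference concrete:

-- rank vs dense_rank over a handful of invented values
WITH scores AS (
    SELECT v AS score
    FROM (VALUES (100), (100), (90), (80), (80), (70)) AS t(v)
)
SELECT
    score,
    RANK()       OVER (ORDER BY score DESC) AS rank_value,        -- 1, 1, 3, 4, 4, 6
    DENSE_RANK() OVER (ORDER BY score DESC) AS dense_rank_value   -- 1, 1, 2, 3, 3, 4
FROM scores;

For the row with 90, rank counts the two rows ahead of it and returns 3, while dense_rank counts the single distinct higher value and returns 2.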

I found the syntax of the ranking functions confusing initially because I was using the rank to logically partition query results, yet the partitioning criteria for this purpose go in the order by clause rather than a partition clause. The ranking functions do provide a partition by clause, as with row_number, in which case the ranking is computed within each defined partition.
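A tiny sketch of that pattern, using an invented dbo.CustomerOrders table: to hand each customer's rows a group identifier, the grouping column goes in the order by of dense_rank rather than a partition by clause.

-- dense_rank as a group identifier (table and columns invented)
SELECT
    CustomerId,
    OrderId,
    DENSE_RANK() OVER (ORDER BY CustomerId) AS customer_group_id
FROM dbo.CustomerOrders;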

Analogous to creating sequential row numbers within a partition is the ability to add an Over clause, with Partition By and Order By, to the Sum aggregate, creating a running total. In fact, summing the constant value 1 yields a result identical to row_number. This capability is essential to the query problem tackled in the second example. Though not a part of that query, when a partition clause is used for Sum but no ordering clause, each row of the result set contains the total for its partition, which is useful for calculating a percent of total for each row.
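As a quick sketch of both uses against the same invented dbo.CustomerOrders table: the first SUM below produces a running total (and summing the constant 1 the same way reproduces row_number), while the last SUM, with a partition but no ordering, repeats the partition total on every row so percent of total is a simple division.

-- Running total, row_number equivalent, and percent of total (columns invented)
SELECT
    CustomerId,
    OrderDate,
    Amount,
    SUM(Amount) OVER (PARTITION BY CustomerId
                      ORDER BY OrderDate
                      ROWS UNBOUNDED PRECEDING)      AS running_total,
    SUM(1)      OVER (PARTITION BY CustomerId
                      ORDER BY OrderDate
                      ROWS UNBOUNDED PRECEDING)      AS row_number_equivalent,
    Amount * 100.0
        / SUM(Amount) OVER (PARTITION BY CustomerId) AS pct_of_customer_total
FROM dbo.CustomerOrders;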

Without getting into details, the SQL Server development team implemented these functions such that they are generally far more performant than alternative ways of getting the same result, which often involve correlated sub-queries. I view them, in some respects, as ‘in-line subqueries’.

A short example demonstrating these functions is shown below. Let’s talk about the data for the example. We have a table containing manufacturing steps for product orders. A given order is specified uniquely by the 3-tuple of order number, sequence number, and division.

Each order in this table lists the manufacturing steps involved in preparing the order for sale. Each step is uniquely identified within the order by an operation number, an arbitrary number whose sequence matches the order in which the manufacturing operations are to be performed. I have included an operation description for each operation simply to give an idea of what said operations would be like in this fictitious company. In the example, I used some coloring to visually indicate how the sample data is partitioned based on a combination of column values.

RankingFunctionsExample1
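Since the sample data appears above only as an image, here is a hedged guess at its shape; the table name dbo.ManufacturingStep and these exact column names are stand-ins of my own, reused in the sketch at the end of this post.

-- Hypothetical shape of the sample data (names invented for illustration)
CREATE TABLE dbo.ManufacturingStep (
    OrderNumber     int          NOT NULL,
    SequenceNumber  int          NOT NULL,
    Division        char(2)      NOT NULL,
    OperationNumber int          NOT NULL,  -- ordering of steps within the order
    WorkCenter      varchar(20)  NOT NULL,
    OperationDesc   varchar(100) NULL,
    CONSTRAINT PK_ManufacturingStep
        PRIMARY KEY (OrderNumber, SequenceNumber, Division, OperationNumber)
);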

Given data organized as above, there is a request to partition the processing steps for an order such that all operations sequentially performed at a work center are grouped together. Said groupings will be referred to as Operation Sequences. To better demonstrate boundary conditions, I have added a bit more data to the table for the second example.

One potential use for such Operation Sequences would be to sum up the time an order spends at each workstation.

The first step in this approach is to identify which Operations involve the work-in-progress arriving at a new workstation. In the unlikely event that one order ends at a given workstation and the next order starts at that same one, we need to identify changes in Order Id as well. To do this, the Lag function, introduced in SQL Server 2012, provides a compact approach.

By emitting a one for each row where a change occurs, a running total, computed with the Sum function and an over clause, yields a unique identifier for each Operation Sequence.
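Putting those two ideas together, a sketch of the query might look like the following; it assumes the hypothetical dbo.ManufacturingStep table sketched earlier, so the real column names in the example image may differ.

-- Number the Operation Sequences: LAG flags a change of work center (or a new order,
-- since the partition resets), and a running SUM of the flags becomes the identifier
WITH flagged AS (
    SELECT
        OrderNumber, SequenceNumber, Division, OperationNumber, WorkCenter,
        CASE
            WHEN LAG(WorkCenter) OVER (PARTITION BY OrderNumber, SequenceNumber, Division
                                       ORDER BY OperationNumber) = WorkCenter
            THEN 0 ELSE 1   -- 1 for the first step of an order or a new work center
        END AS new_sequence_flag
    FROM dbo.ManufacturingStep
)
SELECT
    OrderNumber, SequenceNumber, Division, OperationNumber, WorkCenter,
    SUM(new_sequence_flag) OVER (ORDER BY OrderNumber, SequenceNumber, Division, OperationNumber
                                 ROWS UNBOUNDED PRECEDING) AS operation_sequence_id
FROM flagged
ORDER BY OrderNumber, SequenceNumber, Division, OperationNumber;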

RankingFunctionsExample2

For a fuller treatment of the ranking/windowing functions, I recommend Itzik Ben-Gan's book Microsoft SQL Server 2012 High-Performance T-SQL Using Window Functions. If you want to shorten your queries and speed them up, I recommend you get comfortable with these functions and begin to tap their enormous potential.