Tree [ea00bd] master / db-sqlite /
File Date Author Commit
 Printers.ps1 2019-05-29 theitguys [00796e] setting up new repo
 ReadMe.txt 2019-05-29 theitguys [00796e] setting up new repo
 SQLite-TheDefinitiveGuide.pdf 2019-05-29 theitguys [00796e] setting up new repo

Read Me

#
# ReadMe file for Printers.PS1
#
A while ago, in the post on using drop files so scripts can cooperate with each other on an endpoint, I mentioned that the real fun would start when I had SQLite on each endpoint. Well, the first template is done, and only cursorily tested.

SQLite

For those not familiar, SQLite is a complete RDBMS (Relational Database Management System) that runs not as a service like MSSQL, Oracle, MySQL, etc., but as a single file. Like MS Access, you open it up, do your thing, and close the connection. You point your scripts at a file, not a socket/connector/TCP/IP port. It has bindings for almost any operating system and language, and the files can sit on any file system. You get your basic CRUD operations (SELECT, INSERT, UPDATE, DELETE), plus CREATE, ALTER, DROP, and all the other bells & whistles you'd expect. My goal is to create tables in the database (file) to hold detailed events and reports on the endpoints, allowing us to zip & email, hit APIs, or create event log entries for Ninja and other tools to slurp up. As we may never get [near] real-time detailed events from our endpoints through Ninja (I've already discussed the architectural "why"), this may be a way to help fill this significant gap with the leaders; at least, that is my hope. If I can buy Ninja a little time, perhaps they will come through for us.
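(The script itself is PowerShell; here is a minimal Python sketch, with an illustrative one-table schema, of what "point your scripts at a file, not a socket" looks like in practice.)

```python
import sqlite3

# No service to install and no port to open: the "connection" is just a file.
con = sqlite3.connect("ninja.sqlite")   # creates the file if it doesn't exist
con.execute("CREATE TABLE IF NOT EXISTS printers (Name TEXT PRIMARY KEY, Location TEXT)")
con.execute("INSERT OR REPLACE INTO printers (Name, Location) VALUES (?, ?)", ("Fax", None))
con.commit()
rows = con.execute("SELECT Name, Location FROM printers").fetchall()
con.close()
```

Close the connection and you are done; the data is sitting in ninja.sqlite on disk, waiting for the next script to open it.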

I created a single script that will maintain a database table on the endpoint with the currently configured printers. It isn't that printers are hard or special; I just had to pick something as a template. The script is easily adaptable to any other on-endpoint information you need to keep track of, as well as to computing statistics or creating alerts for different types of system changes. This script will add, update, and even prune the 'printers' table to keep it up to date with what is actually on the system. It does this with the printer's name and a checksum of the concatenated table column values for each printer.
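The 32-hex-character values in the sample output below suggest MD5, but the exact algorithm, column order, and separator live in Printers.ps1; those are assumptions here. The general idea, sketched in Python:

```python
import hashlib

def row_hash(*values):
    # Concatenate the meaningful column values (NULL becomes empty string)
    # and hash the result; a change to any column changes the hash.
    # "|" as separator and MD5 are assumptions for illustration.
    joined = "|".join("" if v is None else str(v) for v in values)
    return hashlib.md5(joined.encode("utf-8")).hexdigest()
```

Comparing a freshly computed hash against the stored 'Hash' column tells you in one string comparison whether anything about that printer changed.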

I'll go over just a few highlights:

Because this is initially written for NinjaRMM, the default is to create a database called ninja.sqlite located in a 'db' directory under the .\scripting folder. It can be anywhere, really, but the default Ninja location for running scripts is C:\ProgramData\NinjaRMMAgent\scripting\, so this script will create and use a dir called 'db' under that one, unless you change it in the script or via command-line options.

You can name it anything you want. Even the extension doesn't matter (unless you have a tool, like DB Browser for SQLite, tied to its extension). In my case right now:

SELECT * FROM printers

...yields (in JSON; forgive the lame Zendesk editor 'formatting'):

[
 {
 "Hash": "58b6ed805a82caaec4e299a23ada2ec1",
 "LastUpdated": "2019-01-01 18:19:41",
 "Location": "210 Office",
 "Name": "ITG InkJet (HP OfficeJet Pro 6960)",
 "PrinterState": "128",
 "PrinterStatus": "1",
 "ShareName": "ITG InkJet (HP OfficeJet Pro 6960)",
 "SystemName": "PAT-WIN7ULT"
 },
 {
 "Hash": "181fa32efae7e05583cbcebc56d8e679",
 "LastUpdated": "2019-01-01 18:19:41",
 "Location": null,
 "Name": "Fax",
 "PrinterState": "0",
 "PrinterStatus": "3",
 "ShareName": null,
 "SystemName": "PAT-WIN7ULT"
 }
]

As you can see, just the most important stuff is captured, and a 'Hash' column is updated with the hash of the values in the meaningful columns, with the exception of the 'LastUpdated' column. If you want to see what you can actually query, check out the Win32_Printer WMI class documentation!

It is also trivial to add an enum mapping these codes to text. Here is the legend for the 2 non-self-explanatory columns, PrinterState & PrinterStatus, which in the table are just numbers:

---PrinterState---

Data type: uint32
Access type: Read-only
Qualifiers: Deprecated
One of the possible states relating to this printer. This property is obsolete. In place of this property, use PrinterStatus.
0 Idle - for more information, see the Remarks section below.
1 Paused
2 Error
3 Pending Deletion
4 Paper Jam
5 Paper Out
6 Manual Feed
7 Paper Problem
8 Offline
9 I/O Active
10 Busy
11 Printing
12 Output Bin Full
13 Not Available
14 Waiting
15 Processing
16 Initialization
17 Warming Up
18 Toner Low
19 No Toner
20 Page Punt
21 User Intervention Required
22 Out of Memory
23 Door Open
24 Server_Unknown
25 Power Save

---PrinterStatus---

Data type: uint16
Access type: Read-only
Qualifiers: MappingStrings ("MIB.IETF|Printer-MIB.hrPrinterStatus")
Status information for a printer that is different from information specified in the logical device Availability property.
This property is inherited from CIM_Printer.
Other (1)
Unknown (2)
Idle (3)
Printing (4)
Warmup (5)
Stopped Printing (6)
Offline (7)
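For example, the legends above can become simple lookup tables (a Python sketch; the database table stores only the raw numbers):

```python
# Legends from the WMI documentation above, as code-to-text lookups.
PRINTER_STATUS = {
    1: "Other", 2: "Unknown", 3: "Idle", 4: "Printing",
    5: "Warmup", 6: "Stopped Printing", 7: "Offline",
}

PRINTER_STATE = {  # deprecated in WMI, but still populated
    0: "Idle", 1: "Paused", 2: "Error", 3: "Pending Deletion",
    4: "Paper Jam", 5: "Paper Out", 6: "Manual Feed", 7: "Paper Problem",
    8: "Offline", 9: "I/O Active", 10: "Busy", 11: "Printing",
    12: "Output Bin Full", 13: "Not Available", 14: "Waiting",
    15: "Processing", 16: "Initialization", 17: "Warming Up",
    18: "Toner Low", 19: "No Toner", 20: "Page Punt",
    21: "User Intervention Required", 22: "Out of Memory",
    23: "Door Open", 24: "Server_Unknown", 25: "Power Save",
}

# The Fax row in the sample output has PrinterStatus "3":
label = PRINTER_STATUS[int("3")]
```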

...anyway... 

[line 100] First it checks whether the directory the database is configured to live in (by default or via the command line) exists, and creates it if needed.

[line 111] Check to see if the database file itself (default is 'ninja.sqlite') exists; if not, it is created with the appropriate schema. FYI, issuing a CREATE TABLE command against a nonexistent file creates both the database and the table automatically.

[line 135] Get a list of all printers' names in the printers table. On a first run, there may be none.

[line 142] This is actually a long one-liner, ending on line 235. It asks WMI for a list of configured printers on the localhost, then goes through each one returned via output piped to the ForEach-Object on line 143.

[line 159] Create the concatenated string from which the checksum is calculated on line 164.

[line 173] Add each printer to an array (ArrayList).

[line 174] Subtract (Remove) each found printer from the list of ones in the database. The idea is that any names left over after all configured printers have been run through are stale and need to be purged (Remove'd).

[line 178] Determine if a printer is already in the printers table, because if so we need to UPDATE that row when the checksums differ, which is determined on line 183.

[line 215] If it is not in the table already, and it has been found/discovered as configured, then it needs to be INSERT'd, which finishes on line 224.

That's really it!

[line 245] After the above is run for each found/configured printer, the array of printers to delete is gone through, with each one deleted by name on line 246.
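The walkthrough above condenses into a short sketch. This is Python rather than the actual PowerShell, with a toy two-column schema and a plain list standing in for the WMI query; the MD5 choice and the "|" separator are assumptions:

```python
import hashlib
import sqlite3

def sync_printers(db_path, current_printers):
    """Upsert-and-prune flow: current_printers is a list of dicts
    (a stand-in for the WMI query in the real script)."""
    con = sqlite3.connect(db_path)
    # CREATE TABLE also creates the database file itself on a first run.
    con.execute("""CREATE TABLE IF NOT EXISTS printers (
        Name TEXT PRIMARY KEY, Location TEXT, Hash TEXT,
        LastUpdated TEXT DEFAULT CURRENT_TIMESTAMP)""")
    # Names already in the table; whatever is left at the end gets purged.
    to_delete = {name for (name,) in con.execute("SELECT Name FROM printers")}
    for p in current_printers:
        h = hashlib.md5(f"{p['Name']}|{p.get('Location') or ''}".encode()).hexdigest()
        to_delete.discard(p["Name"])        # still configured, so keep it
        row = con.execute("SELECT Hash FROM printers WHERE Name = ?",
                          (p["Name"],)).fetchone()
        if row is None:                     # new printer: INSERT
            con.execute("INSERT INTO printers (Name, Location, Hash) VALUES (?, ?, ?)",
                        (p["Name"], p.get("Location"), h))
        elif row[0] != h:                   # changed printer: UPDATE
            con.execute("""UPDATE printers SET Location = ?, Hash = ?,
                           LastUpdated = CURRENT_TIMESTAMP WHERE Name = ?""",
                        (p.get("Location"), h, p["Name"]))
    for name in to_delete:                  # no longer configured: DELETE
        con.execute("DELETE FROM printers WHERE Name = ?", (name,))
    con.commit()
    con.close()
```

Run it twice with different inputs and the table ends up mirroring only the second run's printers, with changed rows updated in place.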

If debugging is on (the initial commit has it on by default, but only because I forgot to set the default to $False), it is very verbose, even for such a small script.

To test, keep deleting the file, and adding/removing test printers and re-running the script. No guarantees, so expect it to delete everything and be pleasantly surprised when it doesn't.

The above is only one of several ways to use tables on endpoints. Here are the most common:

1. As in the above, the table only has rows for the currently existing data (printers, hard drives, users, whatever). Rows are added, updated, and deleted to reflect current state.

2. Each row can store the current state of a CHANGED printer (or whatever) as a separate row. In other words, it adds a row any time something is added, altered, or removed. Think of it as a kind of change log, or audit. You would add more columns for things like the user who made the change, add a 'created' column to work alongside the LastUpdated one, etc.

3. Anything that happens in the desired subsystem is logged as in #2 above, except that at periodic intervals the data is sent somewhere, and, optionally with confirmation of receipt, the sent rows are deleted, even as new ones are being added. This involves the concept of a 'high water mark'.
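A minimal sketch of the high-water-mark idea in #3, in Python with a hypothetical 'events' table and a caller-supplied send function:

```python
import sqlite3

def ship_and_prune(con, send):
    """Send rows up to the current high-water mark, then delete only
    those rows, so rows added mid-send are left untouched."""
    (hwm,) = con.execute("SELECT MAX(id) FROM events").fetchone()
    if hwm is None:
        return 0                       # nothing to ship
    rows = con.execute("SELECT id, payload FROM events WHERE id <= ?",
                       (hwm,)).fetchall()
    if send(rows):                     # only prune after confirmed receipt
        con.execute("DELETE FROM events WHERE id <= ?", (hwm,))
        con.commit()
    return len(rows)
```

Because the DELETE is bounded by the same mark as the SELECT, an insert that lands between the two is simply picked up on the next interval.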

4. A table is used to collect and compute stats on other tables' data. Perhaps not just a table of a certain event log's event id 123, but also the rate (events/minute) at which they are being created, stored similarly to example #1 above. Each row is a stat, with no dupe rows, just updates. There are many, many statistical functions built in. If you can imagine it, odds are it's already there as a core function, so don't write it until you are certain it isn't already there.
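Pattern #4's rate calculation can lean entirely on SQLite's built-in aggregates; a Python sketch with a hypothetical event_log table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE event_log (minute INTEGER, event_id INTEGER)")
con.executemany("INSERT INTO event_log VALUES (?, ?)",
                [(0, 123), (0, 123), (1, 123), (2, 123)])

# Events-per-minute for event id 123, computed with built-in aggregates
# (COUNT, MIN, MAX) instead of hand-rolled loop code.
rate = con.execute("""SELECT COUNT(*) * 1.0 / (MAX(minute) - MIN(minute) + 1)
                      FROM event_log WHERE event_id = 123""").fetchone()[0]
```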

5. Another kind of table keeps track of the last X things. This is for performance reasons. If you have to access every row in a 1,000,000-row table to sort & filter out the 10 most recent rows, it will hammer the endpoint. It is easier to roll off the oldest and insert the newest if this kind of data is needed.
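Pattern #5 can be done by pruning right at insert time; a Python sketch with a hypothetical 'recent' table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE recent (id INTEGER PRIMARY KEY, payload TEXT)")

KEEP = 10
for i in range(25):  # simulate 25 inserts; only the last 10 survive
    con.execute("INSERT INTO recent (payload) VALUES (?)", (f"event {i}",))
    # Roll off the oldest right away so the table never grows past KEEP,
    # instead of sorting a huge table later to find the newest rows.
    con.execute("""DELETE FROM recent WHERE id NOT IN
                   (SELECT id FROM recent ORDER BY id DESC LIMIT ?)""", (KEEP,))
```

The subquery only ever touches KEEP rows via the primary-key index, so the cost per insert stays flat no matter how long the endpoint runs.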

6. Indexes on one table can point to rows in another. So you can do things like query all the actions a specific user did on an endpoint by having an indexed 'userid' column that ties back to the audit log. 

7. A 'settings' kind of table is extremely useful. Think of it as the tabular equivalent of a Dictionary, Hash Table, or Associative Array, depending on the languages you are most familiar with. A basic example is 2 columns, one for the Key and the other for that key's Value. You can store any settings for another script or program to pick up on. It can also be thought of as a much better version of the drop files I described in another post, in that you drop some setting (which will survive a reboot) and a script comes by later, retrieves it, and conditionally does something based on that setting's value.
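A sketch of such a settings table in Python (an in-memory database here for brevity; on an endpoint it would be the on-disk file so values survive reboots):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # on an endpoint: the on-disk ninja.sqlite
con.execute("CREATE TABLE IF NOT EXISTS settings (Key TEXT PRIMARY KEY, Value TEXT)")

def set_setting(key, value):
    # Upsert: a later script always sees the latest value for this key.
    con.execute("INSERT OR REPLACE INTO settings VALUES (?, ?)", (key, value))
    con.commit()

def get_setting(key, default=None):
    row = con.execute("SELECT Value FROM settings WHERE Key = ?", (key,)).fetchone()
    return default if row is None else row[0]

set_setting("maintenance_mode", "1")
```

One script drops the setting, another picks it up later and branches on it, exactly like the drop-file pattern but queryable and atomic.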

8. The last kind I will mention, and only briefly at that, are in-memory tables. They exist only in RAM, and are volatile, meaning they do not survive reboots. 
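In SQLite an in-memory database is just the special filename ':memory:'; for example, in Python:

```python
import sqlite3

# ":memory:" gives a database that lives only in this process's RAM;
# close the connection (or reboot) and it is gone.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE scratch (n INTEGER)")
mem.executemany("INSERT INTO scratch VALUES (?)", [(1,), (2,), (3,)])
total = mem.execute("SELECT SUM(n) FROM scratch").fetchone()[0]
```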

There are more, but these are the most common I can think of off the top of my head. Oh, and in the above, the assumption is that it is a pure RMM model, where things only happen each time the RMM tells a script to run, but this isn't the only scenario. I've written lots of Windows services that can act on triggers and/or the clock. IOW, you are not limited to just scripts, especially for events that are not logged in the Event Viewer. In such a case, running a script once each minute allows something to go unnoticed during the rest of that minute... :)

So like I said, the idea here is to template this for other endpoint stats to be collected, computed, acted upon, or even transmitted upstream. As one example, with the IoT sensors I communicate with via the serial ports on endpoints, recording data in a tabular (RDBMS) fashion is essential. As for Ninja's gaps with the leaders, the architecture Ninja adopted greatly limits many of the things that client-based competitors excel at.

If this concept can be brought to bear to alleviate some of the grief, and allow us to have a faster, near-real-time flow of the events that we each think are most important (think run/use what you need, not one-size-fits-all), then it advances us all.

You can hit me up in the support tabs here if it's not urgent/important, or on our Ninja Slack channel here:

https://join.slack.com/t/ninjarmm-colab/shared_invite/enQtNDczNDQxMTE1MTM3LWY3ZjYwMTFhYzE0NjE3M2YzMDkxNGNhOTVjNzFkMjk1MGZmZjQzM2RiN2JjYWE2ZjVmNjg4MTQ3ZTJmMzVlODU

