PS SQL Programming Guidelines 1.9
GE Fanuc Automation
Page 1 of 120
All rights reserved. No part of this publication may be reproduced in any form or by any electronic or mechanical means, including photocopying and recording, without permission in writing from GE Fanuc Automation.

Disclaimer of Warranties and Liability
The information contained in this manual is believed to be accurate and reliable. However, GE Fanuc Automation assumes no responsibility for any errors, omissions or inaccuracies whatsoever. Without limiting the foregoing, GE Fanuc Automation disclaims any and all warranties, expressed or implied, including the warranty of merchantability and fitness for a particular purpose, with respect to the information contained in this manual and the equipment or software described herein. The entire risk as to the quality and performance of such information, equipment and software is upon the buyer or user. GE Fanuc Automation shall not be liable for any damages, including special or consequential damages, arising out of the use of such information, equipment and software, even if GE Fanuc Automation has been advised in advance of the possibility of such damages. The user of the information contained in the manual and the software described herein is subject to the GE Fanuc Automation standard license agreement, which must be executed by the buyer or user before the use of such information, equipment or software.

Notice
GE Fanuc Automation reserves the right to make improvements to the products described in this publication at any time and without notice.

2007 GE Fanuc Automation. All rights reserved. Microsoft is a registered trademark of Microsoft Corporation. Any other trademarks herein are used solely for purposes of identifying compatibility with the products of GE Fanuc Automation. Proficy is a trademark of GE Fanuc Automation.
Table of Contents

1.0 Introduction
    1.1 Purpose
    1.2 Terminology
    1.3 Contact
    4.2 NOLOCK
    4.3 Indexes
    4.4 Joins
    4.5 Transactions
    4.6 Temporary Tables
    4.7 Cursors
    4.8 Dynamic SQL
        4.8.01 Scope
        4.8.02 Building Strings Within Strings
        4.8.03 Output Values
    5.4 Debug Messages
    5.5 Multilingual Support
    5.6 History Tables
    5.7 Column_Updated_Bitmask Field
7.0 Calculation Stored Procedures
8.0 Event Model Stored Procedures
    8.1 Models and Historian Tags
        8.1.01 Historian Data Query
        8.1.02 Multiple Trigger Tags
    8.2 Model Execution Multithreading and Order
    8.3 Error Messages
    8.4 SQL Historian
    11.5 Downtime
        11.5.01 Querying Downtime Duration
        11.5.02 Calculating Uptime
        11.5.03 Determining Primary and Split Records
        11.5.04 Querying Fault Selection
        11.5.05 Querying Reason Selection
        11.5.06 Querying Category Selection
    11.8 Crew Schedule
    11.9 Interfaces To External Systems
    11.10 User-Defined Properties (UDP)
    11.11 Language
        11.11.01 Querying a User's Language
        11.11.02 Querying Language Prompts and Overrides
        11.11.03 Querying Global and Local Description
12.0 Revision History
13.0 References
14.0 Appendix A: Result Sets
    14.1 Production Events
        14.1.01 Example
    14.5 Alarms
        14.5.01 Example
    14.11 Genealogy Input Events
    14.12 Defects
    14.13 Output File
        14.13.01 Example
1.0 Introduction
1.1 Purpose
This document is intended for users who will be writing SQL code in any form or application against the Plant Applications database. The principles contained within this document are important to follow: they reflect the experience of many users and are stated with the goal of avoiding poor performance and establishing a common framework for support. The contents of this document are for informational reference only and are not supported by GE Fanuc. GE Fanuc reserves the right to change the contents of this document at any time.
1.2 Terminology
Term    Definition
MES     Manufacturing Execution System
PA      Plant Applications
ERP     Enterprise Resource Planning
BOM     Bill of Material
1.3 Contact
Any questions, comments or desired additions to this document should be forwarded to:

Matthew Wells
Project Team Leader
GE Fanuc
(905) 858-6555
[email protected]
2.1 Permissions
The SQL comxclient user must be granted EXECUTE permission for all stored procedures run by Proficy. This applies to all custom event and calculation stored procedures.
2.3 Deadlocks
Deadlocks can be avoided through the use of result sets and by not modifying the database directly. An example of a situation where a deadlock could occur is a stored procedure that inserts a record into the Events table and then immediately attempts to update it. There is a trigger defined on the Events table that, upon record insertion, also attempts to update the record. The result is a situation where two statements are simultaneously attempting to lock the record for update, and it ends in a deadlock.
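A minimal sketch of the deadlock-prone pattern described above (the column names and the trigger behaviour are assumptions for illustration, not the exact Plant Applications definitions):

```sql
-- The stored procedure inserts a row, which fires the insert trigger...
INSERT INTO dbo.Events (PU_Id, TimeStamp)
VALUES (@PU_Id, @TimeStamp)

SELECT @Event_Id = SCOPE_IDENTITY()

-- ...then immediately updates the same row, competing with the
-- trigger's own UPDATE of that row for the same lock:
UPDATE dbo.Events
SET Event_Num = @Event_Num
WHERE Event_Id = @Event_Id
-- Two statements now contend for the same record lock and can deadlock.
```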
3.1 General
The following should be taken into account when writing stored procedures:

- Stored procedure names should always begin with spLocal_. This identifies them to the Proficy Administrator as local custom stored procedures. By default, the Administrator will search for stored procedures starting with the spLocal_ prefix when the stored procedure button is clicked in the calculation or event model properties configuration dialogs.
- Never start a stored procedure name with sp_, as this tells SQL Server to look for it in the master database first, before the local database, so there is a slight performance hit.
- All Transact-SQL reserved words (i.e. SELECT, FROM, WHERE, DECLARE, IF, ELSE, BEGIN, END, etc.) should be upper case.
- All SQL data types (i.e. int, float, datetime, etc.) should be lower case, and declared variables and their data types should be listed vertically and tab indented. For example:
DECLARE @Condition  int,
        @Action     int,
        @Value1     float,
        @Value2     datetime,
        @Value3     varchar(25)
- All SQL functions (i.e. datediff, ltrim, nullif, etc.) should be lower case.
- Variables should not contain any underscores (i.e. @MyNewVariable vs. @My_New_Variable).
- Every permanent object referenced in a stored procedure should have dbo. in front of it, including the declaration of the stored procedure itself. This is essential to prevent unnecessary recompiles of the stored procedure. For example,
CREATE PROCEDURE dbo.spLocal_MyStoredProcedure
All temporary tables should be created together at the beginning of the stored procedure and then collectively dropped at the end of the stored procedure. This will prevent multiple recompiles within the stored procedure.
When simultaneously assigning multiple values to multiple variables, a single SELECT statement should be used instead of multiple SET or SELECT statements as there is a relatively significant performance advantage. For example,
SELECT  @MyVariable1 = 5,
        @MyVariable2 = 6,
        @MyVariable3 = 7
However, when assigning a single value to a single variable, there is a marginal performance advantage to using the SET statement, and it is currently the approach recommended by Microsoft. For example,
SET @MyVariable1 = 5
The default tab size in the SQL editor should be set to 4 characters.
There is no restriction on the size of the abbreviation or the number of characters to use for each clause. The following is a list of common types that can be reused:
Abbreviation    Description
PE              Production event model
ME              Movement event model
UDE             User-defined event model
Calc            Variable calculation
Rpt             Report
SDK             Stored procedure called from the SDK via ExecuteCommand or ExecuteSQL
WEBS            Stored procedure called from a web service
WEBD            Stored procedure called from a web dialog (i.e. an asp page)
IF              Interface stored procedure
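Combined with the spLocal_ prefix, these abbreviations yield names such as the following (all but spLocal_RptMfgDaily, which appears in the header example in section 3.3, are invented for illustration):

```sql
-- spLocal_PEBatchEnd    production event model stored procedure (hypothetical)
-- spLocal_CalcOvenTemp  variable calculation stored procedure (hypothetical)
-- spLocal_RptMfgDaily   report stored procedure
```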
3.3 Comments
Every stored procedure should have a commented header which describes the basic functionality, author, calling applications, and change history. For example:
/*
Stored Procedure:    spLocal_RptMfgDaily
Author:              Matt Wells (MSI)
Date Created:        04/23/02
SP Type:             Model 603
Editor Tab Spacing:  4

Description:
============
This procedure generates the data for a daily manufacturing report.

CALLED BY: RptMfgDaily.xlt (Excel/VBA Template)

Revision   Date      Who    What
========   =====     ====   =====
0.1        5/17/02   MKW    Added new production counter
*/
There should be plenty of comments describing the purpose of each section of code, and major sections should be divided by a sub-header. For example:
/********************************************************************************* * Section 1 * *********************************************************************************/
3.4 Queries
The following should be taken into account when writing queries:

- Primary query clauses should all be at the same indentation level.
- Columns in any SELECT, INSERT, FETCH or VALUES statement should be listed vertically and indented. Any value assignments should also be indented together.
- Multiple conditions in a WHERE clause should be listed vertically and indented, with the condition leading the line.
- Joins should be explicitly referenced using the JOIN clause, as opposed to querying from multiple tables and joining in the WHERE clause. Joins should be indented, and multiple join conditions should be listed vertically and indented with the condition leading the line, in the same manner as a WHERE clause.
For example,
SELECT  @Value1 = Column1,
        @Value2 = Column2,
        @Value3 = Column3,
        @Value4 = Column4
FROM dbo.Table t
    INNER JOIN dbo.Table2 t2 ON t2.Column1 = t.Column1
WHERE Column1 = 5
    AND Column2 IS NOT NULL
    OR Column3 LIKE 'Bob%'
ORDER BY Column1 ASC
The use of EXISTS() in the above query is more efficient than using a COUNT() function.
IF (SELECT COUNT(*) FROM dbo.Events) > 0
    PRINT 'yes'
ELSE
    PRINT 'no'
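The EXISTS() query that the surrounding text refers to was lost at a page break; an EXISTS() version of the same check would look something like this sketch:

```sql
IF EXISTS (SELECT 1 FROM dbo.Events)
    PRINT 'yes'
ELSE
    PRINT 'no'
-- EXISTS() can stop scanning at the first matching row,
-- whereas COUNT(*) must tally every row before the comparison.
```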
4.2 NOLOCK
NOLOCK is a table hint that should be used as much as possible, as it improves the performance of queries and eliminates blocking through lock contention. Essentially, NOLOCK performs uncommitted reads, which can be detrimental in certain applications but typically is not in Plant Applications. An uncommitted read means that the query is taking the data as is, even though the transaction creating or updating the data may not have completed. In a banking application this could have a serious impact, as most reporting is done on the current balances. However, in Plant Applications, most reporting is done afterwards on existing data that is not modified to a large degree (typically only manually).
For example, if a report summarizing downtime was run at 8:00:00 AM and the operator changed a fault assignment at exactly the same time, an uncommitted read would miss the fault change. However, since the fault change transaction only takes a few milliseconds, the window for this situation to occur is extremely small. And if the operator changed the fault at 8:00:01, the report would have to be rerun anyway. The syntax for NOLOCK is as follows:
SELECT *
FROM dbo.Timed_Event_Details ted WITH (NOLOCK)
JOIN dbo.Timed_Event_Faults tef WITH (NOLOCK)
    ON ted.TEFault_Id = tef.TEFault_Id
The following website further describes the functionality and performance benefits of NOLOCK http://www.sqlservercentral.com/columnists/WFillis/2764.asp
4.3 Indexes
Use the table indexes! They are designed to facilitate fast retrieval of a subset of data, so use them as much as possible. Generally, you should try to design your overall data retrieval strategy to take advantage of indexed queries, but this often means that adding seemingly useless conditions to your WHERE clause can greatly speed up the execution of your query. The MSSQL query optimizer generally does the best optimization, but it works within the set of parameters defined by the query itself. As such, it may seem like a black art, but by adding extra clauses or fields to the result set, you can give the query optimizer the opportunity to utilize an index in a situation where it normally wouldn't have.

Each table's indexes can be viewed through the MSSQL Server Enterprise Manager, and the Execution Plan displayed in Query Analyzer shows the actual usage of indexes in a particular query. The key is to look for the operation with the highest Query Percentage cost within a given execution plan, and then try to figure out how to improve that performance. A high Query Percentage cost often occurs when the table indexes aren't properly utilized and too many rows are initially selected from a large table before they are filtered out by other means (i.e. a join). This shows up in the Execution Plan as a high number of rows selected. Modifying the query can cause the query optimizer to use a different index and reduce the number of initial rows selected.

In any query, it is also important to keep the WHERE clause as definite as possible and to avoid too many OR statements that bring multiple unconnected columns into play. These may cause the query optimizer to not use any indexes at all. The key to identifying this situation is to look at your WHERE clause and determine whether there are multiple conditions that could apply to the same record.
For example: The following queries the Variables table looking for a specific string in the Extended_Info field, which is a non-indexed field.
SELECT @Unload_Date_Var_Id = Var_Id
FROM dbo.Variables
WHERE PU_Id = @PU_Id
    AND Extended_Info LIKE '%' + @Flag + '%'
This query can actually be made faster by including a search against the Var_Desc field, which is an indexed field. Peculiarly, to achieve the best performance this query needs the search string to be in a variable as opposed to a constant.
SELECT @Wildcard = '%'

SELECT @Unload_Date_Var_Id = Var_Id
FROM dbo.Variables
WHERE PU_Id = @PU_Id
    AND Var_Desc LIKE @Wildcard
    AND Extended_Info LIKE '%' + @Flag + '%'
Basically, you need to pick the indexes based on what you're putting into your WHERE clause or your JOIN statements. To properly design a database, you need to write the queries at the same time. In theory, you could create an index for every column or column combination, but every index you create adds space to the database and ultimately affects performance. So there's a balance, but unfortunately there are no hard rules about it. SQL Server provides an Index Tuning Wizard which may be of assistance. Fundamentally, there are three choices you have to make:

1) Primary Key - You should always have a unique primary key. This is the unique column or column combination within the table. In Plant Applications, this is almost always an identity field and, because of this, we usually make it a non-clustered index (as there's no point in having a clustered index on a single column with unique values).

2) Clustered Index - You can only have one clustered index, and it should always be a multi-column index that is the most commonly selected key. For example, in the Plant Applications Events table, while Event_Id is the unique primary key, the clustered index is on PU_Id and TimeStamp, because that is the most commonly selected combination. Clustered indexes act as a tree, so they're very fast at retrieving data (i.e. they search for all PU_Id records first before drilling down to TimeStamp). The order of the columns in the clustered index is important (as it is for any index).

3) Non-clustered indexes - You can have many of these. Typically they should be for commonly selected columns or column combinations other than the clustered index or primary key.

The best way to choose your indexes is by writing the queries you need and figuring out which columns are the most commonly selected. As you test the queries, look at the Execution Plan in Query Analyzer and see which indexes each query is using.
You will see things like:

a) Table Scan - this means the query is not using any index and is checking every single row for a match. In small tables this may not be a bad thing, but since most tables are large, it's generally a very bad thing. As the table grows larger, the query will take longer and performance will degrade.

b) Index Scan - this means that the query is scanning the full index for the rows it wants. This is better than a table scan, but the query still has to check every single element. Because indexes are smaller than the actual table, index scans are much faster than table scans. Even so, as the table grows larger, the scan will take longer and performance will degrade.
c) Index Seek - this means the index is being fully utilized in the search. This is what you're aiming for: query performance should remain stable as the table grows larger. Obviously, larger tables mean slower performance in all cases, but the effect is much less pronounced with index seeks.
4.4 Joins
Avoid the use of unnecessary joins, as they will impact the performance of a query. One of the obvious advantages of stored procedures is that you can break down your query into modular components and use variables instead of joins. Don't exceed a maximum of 15 simultaneous joins, as query performance is significantly impacted beyond that. When writing an inner join, try to make the exclusions as one-sided as possible. This will vastly improve the performance of your query. Exclusions on both sides of an inner join can prove to be expensive. Also, never substitute a variable for a joinable field. In the following example, the first join uses the variable @PUId instead of joining to the field in the Events table. This will cause the query to run much slower, because it takes longer for the query engine to merge the rows together. For example,
FROM dbo.Events e
JOIN dbo.Variables v
    ON v.PU_Id = @PUId
    AND v.Var_Desc = 'MyVariable'
vs
FROM dbo.Events e
JOIN dbo.Variables v
    ON v.PU_Id = e.PU_Id
    AND v.Var_Desc = 'MyVariable'
When writing a query with Joins, put the table with the smallest number of rows last in the list and the table with the largest number of rows first.
4.5 Transactions
Avoid the use of SQL transactions; if you must use one, make the transaction as short as possible. SQL transactions help to ensure database integrity by performing all the actions at once. As such, if a required lock cannot be acquired, the whole process stops until the lock is released. This is a leading cause of blocking.
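A minimal sketch of the short-transaction pattern, using SQL 2000-era error handling (the table and column names are invented for illustration):

```sql
BEGIN TRANSACTION

-- Do only the minimum related work inside the transaction
UPDATE dbo.MyTable
SET Value = @Value
WHERE Id = @Id

IF @@ERROR <> 0
    ROLLBACK TRANSACTION    -- undo on failure
ELSE
    COMMIT TRANSACTION      -- commit and release locks as soon as possible
```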
4.6 Temporary Tables

Single- or multiple-column clustered indexes can be created by declaring a PRIMARY KEY. Additional non-clustered indexes can be created using the UNIQUE constraint keyword.

For large datasets, temporary tables should be used, and for good performance, temporary tables should always have a clustered index on them. One thing to remember about temporary tables is that they are available to any stored procedures called by the stored procedure that created the table. Any duplicate create statements in the called stored procedures will not generate error messages; the original table will be used which, if unintended, can generate some unexpected results.

Multiple temporary tables should always be created together at the beginning of a stored procedure to reduce recompiles. Temporary tables will be automatically dropped at the end of the stored procedure that created them, but they should be explicitly dropped (using the DROP TABLE statement) as soon as they are no longer needed in order to free up system resources.

When using temporary tables and/or table variables, it is very important to ensure the table has an index. Lack of proper indexes is a leading cause of poor performance.
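A sketch of a temporary table following these guidelines, with the clustered index declared via a PRIMARY KEY (the table and column names are invented for illustration):

```sql
-- Create all temp tables together at the top of the procedure
CREATE TABLE #MyWork (
    PU_Id     int      NOT NULL,
    TimeStamp datetime NOT NULL,
    Value     float    NULL,
    PRIMARY KEY CLUSTERED (PU_Id, TimeStamp)   -- clustered index via PRIMARY KEY
)

-- ... populate and use #MyWork ...

-- Drop explicitly as soon as it is no longer needed
DROP TABLE #MyWork
```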
4.7 Cursors
Cursors have terrible performance and should never be used. They are expensive in terms of processing and also lock the entire dataset when in use. Furthermore, using a temporary table in a cursor is extremely bad because no other process will be able to create or drop temporary tables for the duration of the cursor as it prevents them from acquiring the necessary exclusive locks on the tempdb database. The processes will be forced to wait and will result in poor server performance. Also, referencing a temporary table in a cursor will force the stored procedure to recompile every time.
Most cursors are just used for looping through a dataset and performing other actions. A simple loop can easily be accomplished using the automatic increment functionality (i.e. IDENTITY) of a temp table or table variable instead. For example, instead of using the following cursor:
DECLARE MyCursor CURSOR FOR
SELECT Field1
FROM DataTable
ORDER BY Field1 ASC

OPEN MyCursor
FETCH NEXT FROM MyCursor INTO @Field1
WHILE @@FETCH_STATUS = 0
BEGIN
    -- process your data
    FETCH NEXT FROM MyCursor INTO @Field1
END
CLOSE MyCursor
DEALLOCATE MyCursor
-- The same loop without a cursor, using a table variable with an
-- IDENTITY column (Field1's data type is assumed here)
DECLARE @MyTable TABLE (
    RowId  int IDENTITY(1,1) PRIMARY KEY,
    Field1 int
)

-- Insert data here in the order desired
INSERT INTO @MyTable (Field1)
SELECT Field1
FROM DataTable
ORDER BY Field1 ASC

-- Get the total number of rows
SELECT @Rows = @@ROWCOUNT, @Row = 0

-- Loop through the rows in the table
WHILE @Row < @Rows
BEGIN
    SELECT @Row = @Row + 1

    SELECT @Field1 = Field1
    FROM @MyTable
    WHERE RowId = @Row

    -- Process your data
END
From a performance perspective, one of the main issues with dynamic SQL relates to how SQL Server manages its execution plans. Every query and stored procedure run in SQL Server requires an execution plan, which basically represents the strategy SQL Server is using to search for and retrieve data. When the code is run for the first time, SQL Server builds an execution plan for it (i.e. it compiles the query) and the plan is saved in cache. The plan is reused until it is aged out or invalidated for some other reason, such as a change to the query or stored procedure code, and this is where problems start to occur with dynamic SQL. Since dynamic SQL typically involves changing the structure of a query, the execution plan is not reused and must be recompiled each time. The time SQL Server takes to generate an execution plan can be significant, so this can result in significant performance degradation. This performance issue can be partially alleviated through the use of the sp_executesql stored procedure, as it allows the definition of parameters in the dynamic SQL, which can reduce the amount of query modification and allow the plans to be reused. As such, if using dynamic SQL, it is especially important to use sp_executesql in place of the EXECUTE() statement. However, modifying the columns returned and/or the WHERE clause itself may still result in recompiles. It is a best practice to always use a variable to hold the SQL statement. For example,
DECLARE @MyString nvarchar(4000)

SELECT @MyString = N'SELECT * FROM dbo.MyTable'

EXEC sp_executesql @MyString
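To get plan reuse, parameters can be declared and passed through sp_executesql rather than concatenated into the string. A sketch (the table and parameter names are invented for illustration):

```sql
DECLARE @MyString nvarchar(4000)

-- @Id is a parameter inside the dynamic SQL, so the statement text
-- never changes and the cached execution plan can be reused
SELECT @MyString = N'SELECT * FROM dbo.MyTable WHERE Id = @Id'

EXEC sp_executesql @MyString,
                   N'@Id int',     -- parameter definition
                   @Id = 5         -- parameter value
```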
4.8.01 Scope
When running EXECUTE() or sp_executesql, the SQL is executed within its own scope and doesn't inherit the scope of the calling stored procedure. This results in the following behaviour:

- Permissions are not inherited, so the calling user must have direct permissions for all the objects involved. There are some options to address this in SQL 2005 (i.e. certificates and/or impersonation) but not in SQL 2000.
- There is no direct access to local variables or parameters of the calling stored procedure without passing them.
- Any USE statement in the dynamic SQL will not affect the calling stored procedure.
- Temp tables created in the dynamic SQL will not be accessible from the calling procedure, since they are dropped when the dynamic SQL exits. The block of dynamic SQL can, however, access temp tables created by the calling procedure.
- If you issue a SET command in the dynamic SQL, the effect of the SET command lasts for the duration of the block of dynamic SQL only and does not affect the caller.
- The query plan for the stored procedure does not include the dynamic SQL. The block of dynamic SQL has a query plan of its own.
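A sketch of the temp-table scoping rules described above (the table names are invented for illustration):

```sql
CREATE TABLE #CallerTable (Id int)

-- The dynamic SQL can see the caller's temp table:
EXEC('INSERT INTO #CallerTable (Id) VALUES (1)')

-- A temp table created inside the dynamic SQL is dropped when it exits:
EXEC('CREATE TABLE #InnerTable (Id int)')
-- A SELECT from #InnerTable here would fail: the table no longer exists

DROP TABLE #CallerTable
```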
4.8.02 Building Strings Within Strings

A common method is to use double quotes. If the SET QUOTED_IDENTIFIER option is set to OFF, double quotes can be used as a string delimiter. The default for this setting depends on context, but the preferred setting is ON, and it must be ON in order to use XQuery, indexed views and indexes on computed columns. However, SET commands within dynamic SQL last only for the block of dynamic SQL, so SET QUOTED_IDENTIFIER can be set within the dynamic SQL itself. For example,
DECLARE @MyString varchar(4000)

SELECT @MyString = 'SET QUOTED_IDENTIFIER OFF SELECT * FROM MyTable WHERE Name = "Jim"'

EXEC(@MyString)
It is not recommended to use double quotes outside of the dynamic SQL statement (i.e. in the calling stored procedure) as they are not supported by default in many SQL editors, which can lead to difficulties in supporting existing stored procedures. Another option is to use direct character references for the quote. While the double quotes may look easier to understand, the char(39) function provides the literal reference for the single quote and is supported by all editors. For example,
DECLARE @MyString varchar(4000)

SELECT @MyString = 'SELECT * FROM MyTable WHERE Name = ' + char(39) + 'Jim' + char(39)

EXEC(@MyString)
Lastly, the QUOTENAME() function can be used to return a string with quotes. For example,
DECLARE @MyString varchar(4000)

SELECT @MyString = 'SELECT * FROM MyTable WHERE Name = ' + QUOTENAME('Jim', '''')

SELECT @MyString
4.9 SP Recompiles
SQL Server performs recompiles as it executes stored procedures in order to optimize them and, to a certain extent, they are a normal part of every database's ongoing operation. Recompiles are evaluated on a statement-by-statement basis as SQL Server executes a stored procedure, so the number of recompiles performed can easily, and unnecessarily, grow beyond the normal limit if attention is not paid to the way stored procedures are written.
When a stored procedure recompiles, it consumes significant system resources for the compilation process and, if done excessively, can impact server performance. In SQL 7.0 and 2000, the entire stored procedure is recompiled, regardless of which part of it caused the recompile, so the larger the procedure is, the greater the impact of recompilation. Furthermore, while recompiling, SQL Server places a compile lock on all the objects referenced by the stored procedure, and when there are excessive recompiles, the database may experience blocking. Notably, in SQL 2005, only the statement in question is recompiled. This statement-level recompile functionality significantly reduces the performance impact of recompilation.

Generally, if a stored procedure is recompiling every time it is executed, it should be rewritten to reduce the likelihood of it recompiling. In extreme cases, poor coding can result in a stored procedure recompiling multiple times within a single run. The following things will cause a stored procedure to recompile:

1. Dropping and recreating the stored procedure.
2. Using the WITH RECOMPILE clause in the CREATE PROCEDURE or the EXECUTE statement.
3. Running the sp_recompile system procedure against a table referenced by the stored procedure.
4. The stored procedure execution plan is dropped from the cache. Infrequently used procedures are aged by SQL Server and will be dropped from the cache if the cache memory is needed for other operations.
5. All copies of the execution plan in the cache are in use.
6. The procedure alternates between executing Data Definition Language (DDL) and Data Manipulation Language (DML) operations. When DDL operations (i.e. CREATE statements) are interleaved with DML operations (i.e. SELECT statements), the stored procedure will be recompiled every time it encounters a new DDL operation.
7. Changing the schema of a referenced object (i.e. using an ALTER TABLE or CREATE INDEX). This applies to both permanent and temporary tables.
8. When a stored procedure is compiled and optimized, it is done based on the statistics of the referenced tables at the time it is compiled. Each table in SQL Server (permanent or temporary) has a calculated recompilation threshold, and if a large number of row modifications have been made and exceeds the threshold, then SQL Server will recompile the stored procedure to acquire the new statistics and optimize the procedure again. With respect to permanent tables, this is a normal part of database operations.
9. The following SET options are ON by default in SQL Server, and changing the state of these options will cause the stored procedure to recompile:
   SET ANSI_DEFAULTS
   SET ANSI_NULLS
   SET ANSI_PADDING
   SET ANSI_WARNINGS
   SET CONCAT_NULL_YIELDS_NULL
While there are no good workarounds for the first four SET options, the last one can be avoided by using the ISNULL function. Using the ISNULL function and setting any data that might contain a NULL to an empty string will accomplish the same functionality.

10. The stored procedure performs certain operations on temporary tables, such as:
   a. Declarations of temporary tables cause recompiles during the initial compilation. When a stored procedure is initially compiled, the temporary tables do not exist, so SQL Server will recompile after each temporary object is referenced for the first time. SQL Server will cache and reuse this execution plan the next time the procedure is called, and the recompiles for this particular part of the stored procedure will go to zero. However, execution plans can be aged and dropped from the cache, so periodically this may reoccur.
   b. Any DECLARE CURSOR statement whose SELECT statement references a temporary table will cause a recompile.
   c. Any time a temporary table is created within a control-of-flow statement (i.e. IF..ELSE or WHILE), a recompile will occur.
   d. Temporary tables have a global scope, so they can be created in one stored procedure and then referenced in another. They can also be created using the EXECUTE() statement or with the sp_executesql() routine. However, if the temporary table has been created in a stored procedure (or EXECUTE() statement) other than the one currently referencing it, a recompile will occur every time the temporary table is referenced.
   e. Any statement containing the name of the temporary table that appears syntactically before the temporary table is created in the stored procedure will cause a recompile.
   f. Any statement that contains the name of a temporary table and appears syntactically after a DROP TABLE against the temporary table will cause the stored procedure to recompile.
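The ISNULL workaround can be sketched as follows (the variable names here are illustrative):

	-- Instead of changing CONCAT_NULL_YIELDS_NULL (which forces a recompile),
	-- coalesce any value that might be NULL to an empty string before concatenating
	SELECT @Message = 'Result: ' + ISNULL(@Result, '')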
The following practices should be followed to avoid and reduce the impact of recompiles:

1. Put dbo. in front of every permanent object referenced in the stored procedure. While this doesn't prevent recompiles, it will minimize the impact of the recompile by stopping SQL Server from placing a COMPILE lock on the procedure while it determines whether all objects referenced in the code have the same owners as the objects in the current cached procedure plan.
2. Most recompile issues involve the use of temporary tables. As such, using table variables instead of temporary tables is the best way to avoid them. Table variables do not have recompilation threshold values, so recompilations do not occur because of changes in the number of rows.
3. Place all of the temporary table creation statements together. As mentioned above, during the initial compilation of a stored procedure, SQL Server will recompile each time a temporary table is referenced for the first time throughout the code. By placing them all together, SQL Server will create execution plans for all of them at the same time in just one recompile.
4. Make all schema changes (such as index creation) right after your create table statements and before you reference any of the temporary tables.
5. Do not use a temporary table in the SELECT statement for a cursor. Furthermore, cursors should not be used at all.
6. Do not create a temporary table within a control-of-flow statement (i.e. IF..ELSE or WHILE).
7. Do not create a temporary table in an EXECUTE statement or using the system procedure sp_executesql.
8. Do not use a temporary table in a stored procedure other than the one the table was created in.
9. Do not reference a temporary table before it is created.
10. Do not reference a temporary table after it is dropped.
11. Execute SQL statements that are causing recompilation with sp_executesql. Statements using this method are not compiled as part of the stored procedure plan but have their own plans created. When the stored procedure encounters a statement using sp_executesql, it is free to use one of the statement plans or create a new plan for that statement. Using sp_executesql is preferred to using EXECUTE because it allows parameterization of the query. This should only be used for specific SQL queries that have been determined to be causing excessive recompiles. Alternatively, a sub-procedure could also be used to execute specific statements that are causing recompilation. While the sub-procedure would be recompiled, the size and scope of the sub-procedure is much smaller and, as such, the impact is greatly reduced. This mimics the statement-level recompile functionality implemented in SQL Server 2005.
12. Do not use query hints such as KEEPFIXED PLAN. Query hints are only effective in very specific situations where the same dataset is being returned. Using query hints without an in-depth analysis of their effectiveness will generally result in slower performance.
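A parameterized sp_executesql call (practice 11 above) might look like the following sketch; the table and variable names are illustrative:

	DECLARE @SQL       nvarchar(4000),
	        @StartTime datetime

	SELECT @StartTime = '2003-01-01'

	SELECT @SQL = N'SELECT Result FROM dbo.Tests WHERE Result_On > @From'

	-- The statement gets its own plan, and the parameter allows that plan to be reused
	EXEC sp_executesql @SQL, N'@From datetime', @From = @StartTime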
4.10 NOLOCK
5.1.02 Numeric
Proficy variable values are stored in a varchar format. When selecting data from the tests table, care must be taken to ensure that the values are in the right format before converting them. Using the isnumeric() function allows you to verify the data's validity before converting. For example:
DECLARE @Result varchar(25),
        @Value  real

SELECT @Result = Result
FROM tests
WHERE Test_Id = 123456

IF isnumeric(@Result) = 1
BEGIN
    SELECT @Value = convert(real, @Result)
END
When dealing with integer values, it is often wise to convert the value to a real value before converting to an integer. Conversion of a string representation of a real value directly to an integer will produce an error. However, conversion of that value to a real first will not. This is especially important when summarizing integer values.
DECLARE @Value int

SELECT @Value = convert(int, sum(convert(real, Result)))
FROM tests
WHERE Var_Id = 12345
  AND Result_On > '2003-01-01 00:00:00'
  AND Result_On < '2003-02-01 00:00:00'
This can be especially relevant when writing certain model stored procedures (i.e. Model 603), as data stored in the historian may be retrieved as a real value when it's actually an integer value (i.e. 1.0). That particular string value cannot be directly converted to an integer value. However, converting it first to a real or float will allow subsequent conversion to an integer.
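To illustrate the double conversion described above:

	-- convert(int, '1.0') fails with a conversion error because of the decimal point
	-- Converting through real first succeeds
	SELECT convert(int, convert(real, '1.0'))   -- returns 1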
You can then use those parameters either in Site_Parameters or as User parameters. The Field_Type_Id comes from ED_FieldTypes and the Parm_Type_Id allows you to restrict the location of the parameters (i.e. 0=All, 1=Site, 3=Site Users).
When a new model template has been created, the last tab allows for the definition of user-defined properties.
The properties are stored in the Event_Configuration_Properties table and can be accessed from the model's stored procedure using the standard stored procedure spCmn_ModelParameterLookup provided in the Plant Applications database. The stored procedure takes the following arguments:
Argument        Description
@Value          Value of the property returned from the procedure.
@ECId           The identity (EC_Id) of the event configuration record in the Event_Configuration table. Each time an event is added to a production unit, a record gets created in this table. For most generic models (i.e. 601, 603, 1052, etc.), the EC_Id is passed into the model stored procedure by the EventMgr.
@PropertyName   Name of the user-defined property (i.e. MyCustomProperty).
@DefaultValue   A default value that will be passed into the @Value field if the model property does not exist or has a value of NULL.
For example,
DECLARE @Value         varchar(1000),
        @ECId          int,
        @PropertyName  varchar(255),
        @DefaultValue  varchar(1000)

SELECT  @ECId         = 23,
        @PropertyName = 'MyCustomProperty',
        @DefaultValue = 'Tinkerbell'

EXEC dbo.spCmn_ModelParameterLookup
        @Value OUTPUT,
        @ECId,
        @PropertyName,
        @DefaultValue
The properties can then be accessed from a stored procedure using the standard stored procedure spCmn_PUParameterLookup provided in the Plant Applications database. The stored procedure takes the following arguments:
Argument        Description
@Value          Value of the property returned from the procedure.
@PUId           The Production Unit Id (PU_Id) of the record in the Prod_Units table. This id is unique for every Production Unit and is visible in the Administrator by right-clicking on the Production Line in the Plant Model and listing the units in the right-hand window pane.
@PropertyName   Name of the user-defined property (i.e. MyCustomProperty).
@DefaultValue   A default value that will be passed into the @Value field if the property does not exist or has a value of NULL.
For example,
DECLARE @Value         varchar(1000),
        @PUId          int,
        @PropertyName  varchar(255),
        @DefaultValue  varchar(1000)

SELECT  @PUId         = 12,
        @PropertyName = 'MyCustomProperty',
        @DefaultValue = 'Tinkerbell'

EXEC dbo.spCmn_PUParameterLookup
        @Value OUTPUT,
        @PUId,
        @PropertyName,
        @DefaultValue
The Execution Path properties are stored in the same place as the Production Unit properties but there is no standard stored procedure to access them. As such, they must be queried directly from the tables.
Table                  Description
Table_Fields           Definition of the properties, including name and data type.
Table_Fields_Values    The value of the property.
The Table_Fields_Values table has the following fields that must be referenced:
Field            Description
Value            Value of the property.
Table_Field_Id   Id of the corresponding record in Table_Fields.
TableId          Id of the corresponding record in the Tables table. The value will always be 13 for accessing Execution Path properties as this corresponds to the PrdExec_Path table.
KeyId            For Execution Paths this will be the Path_Id in the PrdExec_Path table.
For example,
DECLARE @FieldId   int,
        @FieldDesc varchar(50),
        @PathId    int,
        @Value     varchar(7000)

SELECT  @FieldDesc = 'SAP Route',
        @PathId    = 1

SELECT @FieldId = Table_Field_Id
FROM Table_Fields
WHERE Table_Field_Desc = @FieldDesc

SELECT @Value = Value
FROM Table_Fields_Values
WHERE Table_Field_Id = @FieldId
  AND TableId = 13
  AND KeyId = @PathId
-- Create the table variable to hold the result set data
-- (column types reconstructed here; the defaults match the sample output below)
DECLARE @VariableResults TABLE (
    ResultSetType int DEFAULT 2,
    Var_Id        int,
    PU_Id         int,
    User_Id       int DEFAULT NULL,
    Cancelled     int DEFAULT 0,
    Result        varchar(25),
    Result_On     varchar(50),
    TransType     int DEFAULT 1,
    Post_Update   int DEFAULT 0)

DECLARE @Var_Id        int,
        @PU_Id         int,
        @Var_Precision int

-- Gather the data
SELECT  @Var_Id        = Var_Id,
        @PU_Id         = PU_Id,
        @Var_Precision = Var_Precision
FROM Variables
WHERE Var_Desc = 'MyVariable'

-- Create the result set in the table variable
INSERT INTO @VariableResults (
    Var_Id,
    PU_Id,
    Result,
    Result_On)
VALUES (
    @Var_Id,
    @PU_Id,
    ltrim(str(1.234, 50, @Var_Precision)),
    convert(varchar(50), getdate(), 120))

-- Output the result set to the calling service
SELECT  ResultSetType,
        Var_Id,
        PU_Id,
        User_Id,
        Cancelled,
        Result,
        Result_On,
        TransType,
        Post_Update
FROM @VariableResults
When the calling service (i.e. the Calculation Manager or the Event Manager) runs this stored procedure, it will see the following result set, translate it and then send it out on the Message Bus:

2, 53, 12, NULL, 0, 1.23, 2005-07-05 12:02:30, 1, 0

Alternatively, if the stored procedure was executed from Query Analyzer, the above results would appear in the Results Pane but, since it wasn't run by the Event Manager or the Calculation Manager, no message would be sent. It is very important that the structure of the result set is correct, as improperly formatted result sets could crash the service while it attempts to translate them into messages. The services will attempt to translate the returned results of every SELECT statement in the stored procedure. In order to prevent this, table variables should be created for each result set type in the stored procedure to enforce the column structure. There are 20 different result set types:
Type   Description
1      Production Events
2      Variable Test Values
3      Grade Change
5      Downtime Events
6      Alarms
7      Sheet Columns
8      User Defined Events
9      Waste Events
10     Event Details
11     Genealogy Event Components
12     Genealogy Input Events
13     Defect Details
14     Historian Read
15     Production Plan
16     Production Setup
17     Production Plan Starts
18     Production Path Unit Starts
19     Production Statistics
20     Historian Write
50     Output File
The standard stored procedure spServer_CmnShowResultSets provides the current list and formats of the result sets available. A more detailed listing and description of the result sets is contained in the Appendix.

The standard methodology for working with result sets is as follows:
1. Create a table variable for the result set
2. Gather the data for the result set
3. Check to see if the record already exists
4. Create the result set in the table variable
5. Output the results to the calling service

Generally, you should always leave the User_Id as NULL in a result set. If it's NULL, it will be filled out by the service the stored procedure was called from (i.e. the EventMgr or the CalcMgr). However, the CalcMgr cannot overwrite a value entered by ComXClient or a Site User. If you need to overwrite a value entered by a user from a calculation, you can use a result set, but the User_Id must be set to a specific Site User. This user must be a non-system custom user (i.e. User_Id > 50) and cannot be ComXClient either.

Each result set has a pre-update and post-update option. When the result set has the pre-update option set, the message will be sent to both the Database Manager and any clients. Alternatively, when the post-update option is set, the message will only be sent to the clients and not the Database Manager. The post-update option would be used if the data has been inserted directly into the database and there is no reason to send it to the Database Manager, but the clients still need to be notified.
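Step 3 of the methodology (checking whether the record already exists) can be sketched as follows; the key columns to check depend on the result set type, and the column choice below is illustrative:

	-- Only build the result set row if no test value already exists
	-- for this variable at this timestamp (illustrative key columns)
	IF NOT EXISTS ( SELECT 1
	                FROM dbo.Tests
	                WHERE Var_Id = @Var_Id
	                  AND Result_On = @Result_On)
	BEGIN
	    INSERT INTO @VariableResults (Var_Id, PU_Id, Result, Result_On)
	    VALUES (@Var_Id, @PU_Id, @Result, @Result_On)
	END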
convert(varchar(25), @TimeStamp, 120), you will lose the seconds in your timestamps and your data will likely not show up where you expect it.
The spCmn_AddMessage stored procedure takes the following arguments, in order:
1. Message to be stored.
2. Usually the name of the calling stored procedure.
3. Custom reference id that could correspond to a line number in the calling stored procedure.
4. Timestamp of the message (defaults to the current server time).
5. The associated connection out of the Client_Connections table. This is mainly used by external applications and is generally not useful for error messages coming from the CalcMgr.
6. A value corresponding to the data in the Message_Types table. It defaults to 0 (Undefined), but a 2 (Generic) would suffice as well.
For example:
IF @Value <= 0
BEGIN
    SELECT @Message = 'Error: Invalid value; Inputs:' +
           convert(varchar(25), @Argument1, 120) + ',' +
           convert(varchar(25), @Argument2, 120)

    SELECT @Stored_Procedure_Name = 'spLocal_CalcValue'

    EXEC spCmn_AddMessage @Message, @Stored_Procedure_Name
END
The COLUMNS_UPDATED function returns a varbinary bit pattern with the bits in order from left to right, the leftmost bit being the least significant. The leftmost bit represents the first column in the table; the next bit to the right represents the second column, and so on. COLUMNS_UPDATED returns multiple bytes if the table on which the trigger is created contains more than 8 columns, the leftmost byte being the least significant. COLUMNS_UPDATED will return the TRUE value for all columns in INSERT actions because the columns have either explicit values or implicit (NULL) values inserted.

The following function, fnLocal_ColumnUpdated, can be used to check whether a particular column in the record was updated. It accepts 2 arguments, the first being the bit pattern contained in the Column_Updated_Bitmask field and the second being the column number to check, and returns a 1 if the checked column was modified and a 0 if it wasn't.
CREATE FUNCTION dbo.fnLocal_ColumnUpdated (
    @COLUMNS_UPDATED binary(8),
    @OP              int)
RETURNS int
AS
BEGIN
    DECLARE @POS    int,
            @PRE    int,
            @RESULT int

    SET @PRE = (@OP - 1) / 8
    SET @POS = POWER(2, (@OP - 1)) / POWER(2, @PRE * 8)

    IF (SUBSTRING(@COLUMNS_UPDATED, @PRE + 1, 1) & @POS <> 0)
    BEGIN
        SET @RESULT = 1
    END
    ELSE
    BEGIN
        SET @RESULT = 0
    END

    RETURN @RESULT
END
It is important to note that the Column_Updated_Bitmask is stored in a varchar format. The field needs to be converted back to a binary format before the bitwise comparison can be performed. In the function above this is implicitly done by receiving the data in a binary(8) argument (@COLUMNS_UPDATED). The column order number is also not always constant from server to server as it depends on the order of the columns in the original SQL statement used to create the table. This has not remained static through the various versions of Plant Applications so a server that was originally installed with a previous version of the software (and then upgraded) may not have the same column order as a server that was installed with a later version. As such, the column order number must be dynamically determined by checking the table schema in the SQL Server system tables. For example,
DECLARE @EDDimXOP smallint

SELECT @EDDimXOP = ORDINAL_POSITION
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Event_Details'
  AND COLUMN_NAME = 'Initial_Dimension_X'

SELECT dbo.fnLocal_ColumnUpdated(convert(binary(8), ech.Column_Updated_Bitmask), @EDDimXOP)
FROM Event_Detail_History ech
WHERE Event_Id = @EventId
ORDER BY Modified_On DESC
It is recommended that a stored procedure be referenced in the SQL historian fields instead of the SQL queries themselves, as it is easier to manage than copying an entire SQL statement into a single line.
6.1 BrowseSQL
In addition to the Administrator, the BrowseSQL statement is used by all the historian services. The Administrator uses the BrowseSQL statement when querying for tags in the Tag Search function, while the services use it to validate the tag configured in the variable sheet or the model. BrowseSQL can accept the following arguments in the string:
Argument ?TagId Description If called from the Administrator, contains the search string entered by the user. If called from a service, contains the tag configured in the variable sheet or model.
6.2 DeleteSQL
I don't know what this is used for, so if you have any thoughts on the matter, please let me know. DeleteSQL can accept the following arguments in the string:
Argument     Description
?TagId       The tag name.
?TimeStamp   Timestamp of the data point to delete.
6.3 InsertSQL
This is used by the Write service to insert data into a historian. InsertSQL can accept the following arguments in the string:
Argument ?TagId Description If called from the Administrator, contains the search string entered by the user. If called from a service, contains the tag configured in the variable sheet or model.
6.4 ReadAfterSQL
ReadAfterSQL can accept the following arguments in the string:

Argument        Data Type       Description
?TagId          varchar(1000)   If called from the Administrator, contains the search string entered by the user. If called from a service, contains the tag configured in the variable sheet or model.
?TimeStamp                      Timestamp from which to query from.
?NumValues                      The number of values to be returned.
?IncludeExact                   An option that specifies whether to return a data point that is equal to the timestamp (i.e. data points >= ?TimeStamp), as opposed to data that is only greater than the timestamp.
?HonorRejects   int
SET ROWCOUNT @NumValues

SELECT TimeStamp, Value, Good
FROM MyHistorianTable
WHERE Tag_Id = @TagId
  AND ( TimeStamp > @Timestamp
        OR ( TimeStamp = @Timestamp AND @IncludeExact = 1))
ORDER BY TimeStamp ASC
6.5 ReadBeforeSQL
ReadBeforeSQL can accept the following arguments in the string:

Argument        Data Type       Description
?TagId          varchar(1000)   If called from the Administrator, contains the search string entered by the user. If called from a service, contains the tag configured in the variable sheet or model.
?TimeStamp                      Timestamp from which to query from.
?NumValues                      The number of values to be returned.
?IncludeExact                   An option that specifies whether to return a data point that is equal to the timestamp (i.e. data points <= ?TimeStamp), as opposed to data that is only less than the timestamp.
?HonorRejects   int
SET ROWCOUNT @NumValues

SELECT TimeStamp, Value, Good
FROM MyHistorianTable
WHERE Tag_Id = @TagId
  AND ( TimeStamp < @Timestamp
        OR ( TimeStamp = @Timestamp AND @IncludeExact = 1))
ORDER BY TimeStamp DESC
6.6 ReadBetweenSQL
ReadBetweenSQL can accept the following arguments in the string:

Argument            Data Type       Description
?TagId              varchar(1000)   If called from the Administrator, contains the search string entered by the user. If called from a service, contains the tag configured in the variable sheet or model.
?StartTime                          Start time from which to query from.
?EndTime                            End time from which to query up to.
?NumValues                          The number of values to be returned.
?IncludeExactStart                  An option that specifies whether to return a data point that is equal to the start time (i.e. data points >= ?StartTime), as opposed to data that is only greater than the start time.
?IncludeExactEnd    int             An option that specifies whether to return a data point that is equal to the end time (i.e. data points <= ?EndTime), as opposed to data that is only less than the end time.
?HonorRejects       int
If no value is explicitly assigned to the return value, a value of NULL will be returned. If the return value is NULL, the CalculationMgr service will still create/modify a record in the Tests table and set the result to
NULL. However, when the output value for a calculation is set to the text string DONOTHING, the CalculationMgr will not create a record or modify an existing record's value. This is an important consideration when implementing high-volume calculations, as it is desirable to prevent needless records from being inserted in the Tests table.

The @OutputValue should always be set to a varchar(25) to match the format of the Result field in the tests table. The values returned from a calculation go directly into the tests table and are not post-formatted. As such, formatting fields in the Variable sheet (i.e. Precision) must be explicitly utilized in the stored procedure if necessary. For example,
DECLARE @Value     float,
        @VarId     int,
        @Precision int

SELECT  @Value = 5.12345,
        @VarId = 2

SELECT @Precision = Var_Precision
FROM Variables
WHERE Var_Id = @VarId

SELECT @OutputValue = ltrim(str(@Value, 25, @Precision))
Initially, when the Event Manager service is started, the cached timestamp is calculated by subtracting the value of the StartupSetBack user parameter for the EventMgr system user from the current time. Similarly, if the Event Manager service is reloaded, the cached timestamp is set to the timestamp specified by the user in the Administrator Control Panel or, if no timestamp was specified, by subtracting the value of the ReloadSetBack user parameter for the EventMgr system user from the current time. For example, if the current time was 4:05 and the Event Manager service was reloaded to 3:00, the service queries would look like the following:
Actual Time   Cached TimeStamp   Data Points Returned
4:05:00       3:00:00            None
4:05:10       3:00:00            None
4:05:20       3:00:00            Value=5.37, Timestamp=4:05:12
4:05:30       4:05:12            None
4:05:40       4:05:12            None
4:05:50       4:05:12            Value=6.34, Timestamp=4:05:43
                                 Value=6.94, Timestamp=4:05:44
                                 Value=7.66, Timestamp=4:05:48
4:06:00       4:05:48            None
For each trigger tag data point retrieved, the Event Manager will then get the last good data point for all the other tags configured in the model based on the timestamp of the trigger tag data point. This set of data is then passed to the model for interpretation (i.e. a stored procedure attached to model 603).
SELECT @Signal = convert(int, convert(float, @EventNewValue))

IF @Signal > 0
BEGIN
    -- Create event
END
ELSE
BEGIN
    -- Log the bad value (the @Message assignment here is an assumed
    -- completion of the original fragment)
    SELECT @Message = 'Bad value (' + @EventNewValue + ') for EC_Id ' +
           convert(varchar(10), @EC_Id)
END
The arguments for the report stored procedure should be the report name and then any user-selected parameters from the parameter web pages (i.e. report start time and end time). The other constant parameters defined in the report template can then be extracted directly from the database using the standard stored procedure spCmn_GetReportParameterValue. For example,
CREATE PROCEDURE dbo.spLocal_RptData
    @RptName          varchar(255),
    @RptStartDATETIME varchar(25),
    @RptEndDATETIME   varchar(25)
AS

DECLARE @RptTitle varchar(4000)

EXEC spCmn_GetReportParameterValue
    @RptName,           -- Report name
    'strRptTitle',      -- Parameter name
    'My Report Title',  -- Default value
    @RptTitle OUTPUT    -- Actual value
The primary fields to look at are Scan Density and Logical Scan Fragmentation. Scan Density should be greater than 90% and Logical Scan Fragmentation should be less than 10%. If either is outside those ranges, then defragmenting or rebuilding the index should be considered. Rebuilding an index using the command DBCC DBREINDEX tends to produce better results but requires a full lock on the table, while defragmenting an index using the command DBCC INDEXDEFRAG can run in the background without affecting ongoing activity in the table. As such, it is better to start with defragmentation and only consider rebuilding the index if the defragmentation is ineffective.
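The fragmentation figures above come from the DBCC SHOWCONTIG command; a typical invocation (using the table from the examples below) is:

	-- Report fragmentation for every index on the table,
	-- including the index ids needed by DBCC INDEXDEFRAG
	DBCC SHOWCONTIG ('User_Defined_Events') WITH ALL_INDEXES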
DBCC INDEXDEFRAG accepts 3 arguments, database name, table name, and index id. The index id can be retrieved from the DBCC SHOWCONTIG command described above and it must be run individually for each index in a table. For example,
DBCC INDEXDEFRAG (GBDB, User_Defined_Events, 1)
DBCC INDEXDEFRAG (GBDB, User_Defined_Events, 2)
etc.
You should always start with the clustered index (which always has an index id of 1) as that affects the other indexes as well.
The sample percentage used in the last update can be calculated from the Rows Sampled and the Rows fields. This sample percentage can be used to gauge whether the table should be updated with a larger sample size. The Density number can also be useful, but really only when comparing it to the density of another index or to the density of the same index after updating the statistics. A lower density number is better, so a successful update of the statistics should result in a lower density number. The query optimizer has a very complex selection process but, generally, when it's faced with a choice of indexes, it will pick the one with the lowest density number. The UPDATE STATISTICS command can be used to manually update table statistics. For example,
UPDATE STATISTICS User_Defined_Events WITH FULLSCAN
Executing the above statement in Query Analyzer will recalculate the statistics based on all the rows in the table (i.e. FULLSCAN). Once complete, the SHOW_STATISTICS command will show the current date for Updated, the Rows and Rows Sampled columns should have the same value and the Density number may be lower. However, this is a resource-intensive operation and, on a table with 2 million rows, it could take up to 5 minutes to complete depending on table width, SQL configuration and hardware. It would be wise to test the update with lower percentages initially to gauge the impact before committing to the FULLSCAN option. For example,
UPDATE STATISTICS User_Defined_Events WITH SAMPLE 25 PERCENT
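The statistics themselves can be inspected with the DBCC SHOW_STATISTICS command; the index name below is illustrative:

	-- Shows Updated, Rows, Rows Sampled and Density for the named index
	DBCC SHOW_STATISTICS ('User_Defined_Events', 'IX_MyIndex')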
10.3 Parallelism
Parallelism is when SQL Server utilizes multiple processors to execute a query. Normally, SQL Server will execute different parts of a query serially but, if the query is costly enough, it will split it into different streams and then execute them in parallel.

Plant Applications has a dual role, which is affected differently by parallelism. It is both an Online Transaction Processing (OLTP) application and a reporting application at the same time. OLTP applications involve lots of small transactions where data is created, updated and deleted, while reporting applications typically involve lots of large, complex queries to extract and analyze data. For reporting applications, parallelism is typically a good thing, as it improves the performance of large, complex data queries. However, parallelism can be bad for Plant Applications as it consumes multiple processors for a single query, thereby impacting the performance of the rest of the server. This is generally a bad thing, as the responsiveness of Plant Applications to operators and other systems is more important than the execution time of a report. Parallelism can also cause performance blocking of a server, where a complex query consumes all the resources on a server and the normal transactional operations of the server cannot complete.

Parallelism can be seen in the master..sysprocesses table. For a given spid, you will see multiple records, each with an incremental ecid and a different kpid (WinNT thread). The record with an ecid of 0 is the master thread that will contain key information about the query (i.e. sql_handle, stmt_start and stmt_end), which will help with diagnosis. The following query will show any current processes that are utilizing parallelism.
SELECT sp.spid, sp.ecid, sp.kpid, sp.*
FROM master..sysprocesses sp WITH (NOLOCK)
JOIN ( SELECT spid
       FROM master..sysprocesses WITH (NOLOCK)
       GROUP BY spid
       HAVING count(kpid) > 1) p
  ON sp.spid = p.spid
Parallelism can be addressed in 3 different ways:
1. Optimize the query to run faster so it doesn't meet the minimum cost threshold for parallelism. This is the best way to address parallelism.
2. Use the MAXDOP option to restrict SQL Server to a single processor for the query. This is generally a safe option but will likely result in the query running longer than it would without it, which is often an acceptable tradeoff (i.e. if it's part of a report). For example,
SELECT * FROM Tests WHERE Result_On > @RptStartTime OPTION (MAXDOP 1)
3. Modify the SQL Server settings to either restrict the number of processors available for parallelism, raise the minimum query cost threshold or disable parallelism altogether. This is the same idea as the MAXDOP query option but on a server-wide scale. These settings should be thoroughly tested before being implemented in a production environment.
Product changes are stored in the Production_Starts table and the logical key is on PU_Id, Start_Time and End_Time. Product change records form a continuous sequence and cannot overlap (i.e. the Start_Time of a record must be the same as the End_Time of the previous record). There must also be a record for every PU_Id and the sequence has to start at 1970-01-01 00:00:00.000 (i.e. the first record in the sequence must have a Start_Time of 1970-01-01 00:00:00.000). The End_Time of the current product change record will be NULL. Product changes are related to other data through the PU_Id and time (i.e. to determine what product a production event is associated with, look for the product change record that occurred within the same time frame and on the same unit).
FROM Events e
INNER JOIN Production_Starts ps
   ON e.PU_Id = ps.PU_Id
  AND e.TimeStamp > ps.Start_Time
  AND ( e.TimeStamp <= ps.End_Time OR ps.End_Time IS NULL)
INNER JOIN Products p
   ON ps.Prod_Id = p.Prod_Id
WHERE e.PU_Id = @PUId
  AND e.TimeStamp > @ReportStartTime
  AND e.TimeStamp <= @ReportEndTime
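The time-frame matching rule above can also be sketched procedurally. The following Python snippet is illustrative only (the dictionaries stand in for Production_Starts rows; the function is not part of Plant Applications):

```python
from datetime import datetime

def find_product_start(production_starts, pu_id, timestamp):
    """Return the product change row whose time frame contains the given
    timestamp on the given unit. Mirrors the join condition:
    TimeStamp > Start_Time AND (TimeStamp <= End_Time OR End_Time IS NULL)."""
    for row in production_starts:
        if row["PU_Id"] != pu_id:
            continue
        if timestamp > row["Start_Time"] and (
            row["End_Time"] is None or timestamp <= row["End_Time"]
        ):
            return row
    return None

# Hypothetical rows: the sequence starts at 1970-01-01 and the current
# record has End_Time = None (i.e. NULL).
starts = [
    {"PU_Id": 1, "Prod_Id": 10,
     "Start_Time": datetime(1970, 1, 1), "End_Time": datetime(2007, 3, 1)},
    {"PU_Id": 1, "Prod_Id": 11,
     "Start_Time": datetime(2007, 3, 1), "End_Time": None},
]
assert find_product_start(starts, 1, datetime(2007, 2, 1))["Prod_Id"] == 10
assert find_product_start(starts, 1, datetime(2007, 4, 1))["Prod_Id"] == 11
```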
Table Name                Description
Events                    Production events
Event_Details             Production event details
Event_Status_Transitions  Start and end time for each production event status transition
The following diagram illustrates the relationship between parent production events (A) and child production events (B,C and D) and what the quantities should be:
[Diagram: Production Event A shown before (Initial_Dimension_X = 100, Final_Dimension_X = 100) and after (Initial_Dimension_X = 100, Final_Dimension_X = 25) the child events consume material]
There is no standard functionality within Plant Applications to execute this quantity calculation. As such, the recommended way of implementing this calculation is to use a set of calculations that are triggered by the production events, event components and waste events. Refer to the
Consumption Calculation Best Practice document for more details on how to implement these calculations.
The production status Complete is flagged as Good, so any event with a status of Complete will be included in the Net Production calculation.
Production event statuses are stored in the Production_Status table. The key fields are as follows:

Field                   Description
ProdStatus_Id           Unique identifier that links to the other tables (i.e. Events.Event_Status)
ProdStatus_Desc         Text of the status (i.e. Complete, Consumed, Hold)
Status_Valid_For_Input  Boolean value defining whether the status is Good/Bad (i.e. 1/0). Correspondingly, this defines whether the production event is Good or Bad.
Count_For_Production    Determines whether to count the production event with this status in net production calculations.
Count_For_Inventory     Determines whether to count the production event with this status in inventory calculations.
There are a number of default reserved statuses. Of particular importance is the Consumed status, which defines whether a production event (i.e. WIP material) has been consumed in the creation of finished goods.
Production Event Status Transitions

The Event_Status_Transitions table provides an easy way to query the timestamps of each status change for a production event. Whenever the production event status changes, a record is created in the Event_Status_Transitions table that records the start and end of the status. This simplifies queries for status changes that would previously have been made against the Event_History table.

For the Start_Time and End_Time fields in the Event_Status_Transitions record, the value is retrieved from the Entry_On field in the Events table, not the TimeStamp field. This allows for the capture of manual status changes by the operators, where the TimeStamp of the event record doesn't change. However, for certain applications, such as automatically tracking the status of a batch by changing the status, it can cause some unexpected behaviour because Events.Entry_On is automatically set when the change is committed to the database, not when the change was effected. As such, there will typically be a slight lag between the Start_Time in the Event_Status_Transitions record and the TimeStamp in the Events record.
For production events that can have partial quantities (i.e. a roll of paper, a batch of liquid, a basket of parts, etc), we need to incorporate the dimension of the event itself, along with any waste and consumed quantities (via genealogy). The query below returns the available production events and their respective available quantities.
SELECT e.Event_Num,
       e.TimeStamp,
       ISNULL(ed.Initial_Dimension_X, 0)
         - SUM(ISNULL(ec.Dimension_X, 0))
         - SUM(ISNULL(wed.Amount, 0)) AS Final_Dimension_X
FROM Events e
JOIN Production_Status ps
  ON ps.ProdStatus_Id = e.Event_Status
LEFT JOIN Event_Details ed
  ON ed.Event_Id = e.Event_Id
LEFT JOIN Event_Components ec
  ON e.Event_Id = ec.Source_Event_Id
LEFT JOIN Waste_Event_Details wed
  ON wed.PU_Id = e.PU_Id -- utilizes the clustered index
 AND wed.Event_Id = e.Event_Id
WHERE e.PU_Id = 6
  AND ps.Count_For_Inventory = 1 -- Filters out non-inventory production events
GROUP BY e.Event_Num, -- Allows the summarization of waste and component dimensions
         ed.Initial_Dimension_X,
         e.TimeStamp
HAVING (ISNULL(ed.Initial_Dimension_X, 0) -- Filters out events that have 0 available quantity
         - SUM(ISNULL(ec.Dimension_X, 0))
- SUM(ISNULL(wed.Amount,0))) > 0
The above query recalculates Final_Dimension_X for each of the queried production events. If Final_Dimension_X were dynamically calculated as each waste and/or component record is created, the query could be simplified to the following:
SELECT e.Event_Num, e.TimeStamp, ed.Final_Dimension_X
FROM Events e
JOIN Production_Status ps
  ON ps.ProdStatus_Id = e.Event_Status
LEFT JOIN Event_Details ed
  ON ed.Event_Id = e.Event_Id
WHERE e.PU_Id = 6
  AND ps.Count_For_Inventory = 1
  AND ed.Final_Dimension_X > 0
However, as previously mentioned, there is no default functionality to dynamically calculate Final_Dimension_X so it would require custom calculations and/or consumption models to determine the correct number.
If Production is Accumulated From Event Dimensions is selected, then Net Production is the sum of the production events' Initial Dimension X field (i.e. Event_Details.Initial_Dimension_X) where the production event timestamp (i.e. Events.TimeStamp) falls within the report range and the production status is defined as Count for Production. The production quantity is pro-rated over the report time frame: the quantity of any event that crosses the report start and/or end time is multiplied by the ratio of the portion of the event that falls within the report period to the total event duration. It is assumed that the rate at which material was added to the production event was constant over the duration of the event.
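The pro-rating arithmetic can be sketched as follows (plain Python with numeric hour values standing in for timestamps; illustrative only, not product code):

```python
def prorated_quantity(initial_dimension_x, event_start, event_end,
                      report_start, report_end):
    """Pro-rate an event quantity over the report window, assuming a
    constant production rate over the event. Assumes the event overlaps
    the report window, as the report query's WHERE clause guarantees."""
    clipped_start = max(event_start, report_start)
    clipped_end = min(event_end, report_end)
    duration = event_end - event_start
    if duration <= 0:
        return 0.0
    return initial_dimension_x * (clipped_end - clipped_start) / duration

# An event of 100 units runs from hour 0 to hour 10; the report covers
# hours 5 to 20, so half the event falls inside the report window.
assert prorated_quantity(100.0, 0, 10, 5, 20) == 50.0
# An event entirely inside the window is counted in full.
assert prorated_quantity(100.0, 12, 14, 5, 20) == 100.0
```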
[Diagram: a production event that partially overlaps the report time frame]
The following is an example of a query that is used to calculate Net Production. The query uses a standard View of the Events table available in 4.3+.
SELECT Net_Production = SUM(
         ed.Initial_Dimension_X
         * datediff(s,
             CASE WHEN e.Actual_Start_Time < @ReportStartTime THEN @ReportStartTime
                  ELSE e.Actual_Start_Time END,
             CASE WHEN e.TimeStamp > @ReportEndTime THEN @ReportEndTime
                  ELSE e.TimeStamp END)
         / datediff(s, e.Actual_Start_Time, e.TimeStamp))
FROM dbo.Events_With_StartTime e
JOIN dbo.Event_Details ed
  ON ed.Event_Id = e.Event_Id
JOIN dbo.Production_Status ps
  ON e.Event_Status = ps.ProdStatus_Id
WHERE e.PU_Id = @ReportUnit
  AND e.TimeStamp > @ReportStartTime
  AND e.Actual_Start_Time < @ReportEndTime
  AND ps.Count_For_Production = 1
If Production is Accumulated From a Variable is selected then Net Production is the summary of the variable value where the variable timestamp (i.e. Tests.Result_On) falls within the report range.
SELECT Net_Production = SUM(convert(real, t.Result))
FROM dbo.Tests t
WHERE t.Var_Id = @ProductionVariableId
  AND t.Result_On > @ReportStartTime
  AND t.Result_On <= @ReportEndTime
FROM Events e
INNER JOIN Production_Starts ps
   ON e.PU_Id = ps.PU_Id
  AND e.TimeStamp > ps.Start_Time
  AND ( e.TimeStamp <= ps.End_Time OR ps.End_Time IS NULL)
-- NOTE: the following includes a COALESCE() function so the join will reference
-- the Applied_Product field if it exists, otherwise it will reference the normal product
INNER JOIN Products p
   ON p.Prod_Id = coalesce(e.Applied_Product, ps.Prod_Id)
WHERE e.PU_Id = @PUId
  AND e.TimeStamp > @ReportStartTime
  AND e.TimeStamp <= @ReportEndTime
Inventory locations are unique by production unit and location code. All the other fields are optional. For example,
Unit_Locations
Inventory locations
Bill_Of_Material_Formulation_Item
Following is a sample query that gets all available events that are at least 5 days old but not older than 30 days for PU_Id 1:
SELECT eh.*
FROM Event_History eh WITH (NOLOCK),
     (SELECT Event_Id, MAX(Modified_On) 'ModifiedOn'
      FROM Event_History WITH (NOLOCK)
      WHERE Start_Time < DATEADD(day, -5, GETDATE())
        AND (Timestamp > DATEADD(day, -30, GETDATE()) OR Timestamp IS NULL)
        AND PU_Id = 1
      GROUP BY Event_Id) r
WHERE eh.Event_Id = r.Event_Id
  AND eh.Modified_On = r.ModifiedOn
The key is to get the last updated record for each event within the given timeframe, using MAX(Modified_On).
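The same last-record-per-event logic can be sketched in Python (dictionaries stand in for Event_History rows; illustrative only):

```python
def latest_history_rows(event_history):
    """For each Event_Id keep only the row with the greatest Modified_On,
    the same idea as the MAX(Modified_On) derived-table join."""
    latest = {}
    for row in event_history:
        key = row["Event_Id"]
        if key not in latest or row["Modified_On"] > latest[key]["Modified_On"]:
            latest[key] = row
    return list(latest.values())

# Hypothetical rows: event 1 was modified twice, event 2 once.
history = [
    {"Event_Id": 1, "Modified_On": 100, "Event_Status": 1},
    {"Event_Id": 1, "Modified_On": 200, "Event_Status": 2},
    {"Event_Id": 2, "Modified_On": 150, "Event_Status": 1},
]
rows = latest_history_rows(history)
assert sorted((r["Event_Id"], r["Modified_On"]) for r in rows) == [(1, 200), (2, 150)]
```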
Production_Setup Production_Setup_Details
The selection is stored in the Prod_Units table in the Production_Type field. The functionality of the two options is as follows:
Option: Production is Accumulated From Event Dimensions (Production_Type value 0 or NULL)
Summarizes all the Event_Details.Initial_Dimension_X fields for units where the Execution Path Production Point has been set to True and the timestamp of the event falls within the start and end time of the Production_Plan_Starts record. In addition, Schedule Manager also includes all events where the PP_Id is filled out in the Event_Details table.

Option: Production is Accumulated From a Variable (Production_Type value 1)
For all units where the Execution Path Production Point has been set to True, summarizes all the result values for the defined Variable where the result timestamp falls within the start and end time of the Production_Plan_Starts record. The variable is stored in the Production_Variable field.
For accumulating quantity from the event details the query would be as follows:
SELECT pp.Process_Order,
       SUM(ed.Initial_Dimension_X)
FROM Production_Plan_Starts pps
JOIN Production_Plan pp
  ON pps.PP_Id = pp.PP_Id
JOIN dbo.Prod_Units pu
  ON pps.PU_Id = pu.PU_Id
 AND ( Production_Type IS NULL OR Production_Type = 0)
JOIN dbo.Prdexec_Path_Units ppu
  ON ppu.PU_Id = pu.PU_Id
 AND ppu.Is_Production_Point = 1
JOIN dbo.Events e
  ON e.PU_Id = ppu.PU_Id
 AND e.TimeStamp >= pps.Start_Time
 AND ( e.TimeStamp < pps.End_Time OR pps.End_Time IS NULL)
JOIN dbo.Event_Details ed
  ON e.Event_Id = ed.Event_Id
WHERE pps.PP_Id = @PPId
GROUP BY pp.Process_Order
If the PP_Id is filled out in the Event_Details table, the Schedule Manager will exclude it from its standard query using the Production_Plan_Starts timestamps. However, if the event is on a unit that is a production point in the path and that unit has been active (i.e. it has at least one record in the Production_Plan_Starts table), it will be included in the quantity total regardless of the timestamps. This functionality is primarily for allocating production events to different patterns associated with a process order. For accumulating quantity from a variable the query would be as follows:
SELECT pp.Process_Order,
       SUM(isnull(convert(real, t.Result), 0))
FROM dbo.Production_Plan_Starts pps
JOIN dbo.Production_Plan pp
  ON pps.PP_Id = pp.PP_Id
JOIN dbo.Prod_Units pu
  ON pps.PU_Id = pu.PU_Id
 AND Production_Type = 1
JOIN dbo.Prdexec_Path_Units ppu
  ON ppu.PU_Id = pu.PU_Id
 AND ppu.Is_Production_Point = 1
JOIN dbo.Tests t
  ON t.Var_Id = pu.Production_Variable
 AND t.Result_On >= pps.Start_Time
 AND ( t.Result_On < pps.End_Time OR pps.End_Time IS NULL)
WHERE pps.PP_Id = 1
GROUP BY pp.Process_Order
[Diagram: three production events, Event_Id = 10, 11 and 12, linked by genealogy records]
Genealogy records can be created either by a genealogy model, based on the configuration and use of the Raw Material inputs, or they can simply be created on their own. The main data tables for genealogy are:
Table Name                   Description
Event_Components             Genealogy links
PrdExec_Input_Event          Current state of the Raw Material Inputs (i.e. what production event is in the Running or Staged position).
PrdExec_Input_Event_History  Historical and current state of the Raw Material Inputs (i.e. what production event is in the Running or Staged position).
record actually refers to the raw material to report on, the Event_Components.Report_As_Consumption field should be set to 1. For example,
[Diagram: a Raw Material event linked through to the Final Product; the link to be reported as consumption has Event_Components.Report_As_Consumption = 1, while the intermediate link has Report_As_Consumption = 0]
The Enterprise Connector service within Plant Applications will utilize this field to calculate raw material consumption for a particular process order.
[Diagram: genealogy link records Component_Id = 5, 6 and 7; Component_Id = 12 has Parent_Component_Id = 7]
When an event is Loaded, the PrdExec_Input_Event_History table gets one record where:
o Event_Id indicates what event was loaded
o Event_Id_Updated is set to 1
o Timestamp_Updated is set to 1
o Unloaded is set to 0
o Unloaded_Updated is set to 0
When an event is Unloaded, the PrdExec_Input_Event_History table gets two records where:
o On the first record, Event_Id indicates what event was unloaded; on the second it is NULL
o Event_Id_Updated is set to 0 on the first record and 1 on the second
o Timestamp_Updated is set to 1 on the first record and 0 on the second
o Unloaded is set to 1 on the first record and 0 on the second
o Unloaded_Updated is set to 1 on both records

Event_Id_Updated Timestamp_Updated Unloaded Unloaded_Updated
---------------- ----------------- -------- ----------------
0                1                 1        1
1                0                 0        1
When an event is Completed, the PrdExec_Input_Event_History table gets one record where:
o Event_Id is set to NULL
o Event_Id_Updated is set to 1
o Timestamp_Updated is set to 1
o Unloaded is set to 0
o Unloaded_Updated is set to 0

Event_Id    Unloaded_Updated Event_Id_Updated Timestamp_Updated Unloaded
----------- ---------------- ---------------- ----------------- --------
NULL        0                1                1                 0
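The flag combinations for the three actions can be summarized in a small Python sketch (illustrative only; it simply restates the record patterns described above):

```python
def history_flags(action, event_id=None):
    """Records written to PrdExec_Input_Event_History for each action,
    per the Load/Unload/Complete descriptions above."""
    if action == "load":
        return [{"Event_Id": event_id, "Event_Id_Updated": 1,
                 "Timestamp_Updated": 1, "Unloaded": 0, "Unloaded_Updated": 0}]
    if action == "unload":
        return [{"Event_Id": event_id, "Event_Id_Updated": 0,
                 "Timestamp_Updated": 1, "Unloaded": 1, "Unloaded_Updated": 1},
                {"Event_Id": None, "Event_Id_Updated": 1,
                 "Timestamp_Updated": 0, "Unloaded": 0, "Unloaded_Updated": 1}]
    if action == "complete":
        return [{"Event_Id": None, "Event_Id_Updated": 1,
                 "Timestamp_Updated": 1, "Unloaded": 0, "Unloaded_Updated": 0}]
    raise ValueError(action)

# A Load writes one record; an Unload writes two.
assert len(history_flags("load", 10)) == 1
assert len(history_flags("unload", 10)) == 2
assert history_flags("complete")[0]["Event_Id"] is None
```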
The following is an example query to help identify what event was running for a given Line, Unit and input position:
SELECT Prod_Lines.PL_Desc_Local AS Line,
       Prod_Units.PU_Desc_Local AS Unit,
       PrdExec_Inputs.Input_Name,
       PrdExec_Input_Positions.PEIP_Desc AS Position,
       Events.Event_Id AS EventId,
       Events.Event_Num AS [Event Number],
       PrdExec_Input_Event_History.Unloaded AS Unload
FROM PrdExec_Inputs
INNER JOIN PrdExec_Input_Event_History
   ON PrdExec_Inputs.PEI_Id = PrdExec_Input_Event_History.PEI_Id
INNER JOIN Prod_Units
   ON PrdExec_Inputs.PU_Id = Prod_Units.PU_Id
INNER JOIN Prod_Lines
   ON Prod_Units.PL_Id = Prod_Lines.PL_Id
INNER JOIN Events
   ON Prod_Units.PU_Id = Events.PU_Id
INNER JOIN PrdExec_Input_Positions
   ON PrdExec_Input_Event_History.PEIP_Id = PrdExec_Input_Positions.PEIP_Id
WHERE (Prod_Lines.PL_Desc_Local = 'XXX')                 -- Line name
  AND (Prod_Units.PU_Desc_Local = 'YYY')                 -- Unit name
  AND (PrdExec_Inputs.Input_Name = 'ZZZ')                -- Input name
  AND (PrdExec_Input_Positions.PEIP_Desc = 'Running')    -- Position
  AND (PrdExec_Input_Event_History.Unloaded_Updated = 1) -- Indicates whether the input was changed
  AND (PrdExec_Input_Event_History.Unloaded = 1)         -- The change is an Unload
11.5 Downtime
The main data tables for downtime are:
Table Name Timed_Event_Details Description Downtime records
Production units Downtime faults Reasons Reason tree Reason tree category assignments Categories
Records in the Timed_Event_Details table are unique by PU_Id and Start_Time. They must also always be in sequence and cannot overlap (i.e. the Start_Time of a record cannot be less than the End_Time of the previous record). In the Timed_Event_Details table the End_Time will be NULL for an open downtime record. Duration should always be calculated as the difference between the Start_Time and End_Time fields, as the Duration field is not consistently updated.
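The duration calculation, clamped to a reporting window, can be sketched as follows (Python, with times in seconds; illustrative only — the report queries implement the same logic with CASE expressions):

```python
def clamped_duration_s(start_time, end_time, report_start, report_end):
    """Duration of a downtime record restricted to the reporting window.
    An open record (end_time is None, i.e. End_Time IS NULL) is clamped
    to the report end."""
    start = max(start_time, report_start)
    end = report_end if end_time is None or end_time > report_end else end_time
    return max(0, end - start)

# A record from 08:00 to 09:00 inside a 00:00-24:00 window: 3600 seconds.
assert clamped_duration_s(8 * 3600, 9 * 3600, 0, 24 * 3600) == 3600
# An open record starting at 23:00 is clamped to the report end: 3600 seconds.
assert clamped_duration_s(23 * 3600, None, 0, 24 * 3600) == 3600
```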
The following query returns all downtime records that fall within a given reporting period, even if the Start_Time or End_Time is outside of it, and, by using a CASE statement, restricts the duration calculation to within the reporting period.
SELECT datediff(s,
         CASE WHEN Start_Time < @ReportStartTime THEN @ReportStartTime
              ELSE Start_Time END,
         CASE WHEN End_Time > @ReportEndTime OR End_Time IS NULL THEN @ReportEndTime
              ELSE End_Time END) AS Duration
FROM Timed_Event_Details
WHERE PU_Id = @PUId
  AND Start_Time < @ReportEndTime
  AND ( End_Time > @ReportStartTime OR End_Time IS NULL)

The following example loads the downtime records into a table variable and then calculates the uptime between successive records:

DECLARE @Downtime TABLE (
    DowntimeId int IDENTITY,
    StartTime  datetime,
    EndTime    datetime,
    Downtime   int,
    Uptime     int NULL)

INSERT INTO @Downtime (StartTime, EndTime, Downtime)
SELECT Start_Time,
       End_Time,
       datediff(s, Start_Time, End_Time)
FROM Timed_Event_Details
WHERE PU_Id = @PUId
  AND Start_Time < @ReportEndTime
  AND (End_Time > @ReportStartTime OR End_Time IS NULL)
ORDER BY Start_Time ASC

UPDATE d1
SET Uptime = datediff(s, d2.EndTime, d1.StartTime)
FROM @Downtime d1
INNER JOIN @Downtime d2
   ON d2.DowntimeId = (d1.DowntimeId - 1)
WHERE d1.DowntimeId > 1

SELECT StartTime, EndTime, Downtime, Uptime
FROM @Downtime
11.6 Waste
The main data tables for waste are:
Table Name Waste_Event_Details Description Waste event records
Event_Reason_Category_Data Event_Reason_Catagories
Waste event records are very similar in concept to downtime records, except that instead of quantifying faults by duration, they quantify them by an amount of material lost.
11.7 Quality
The main data tables for quality are:
Table Name Events Tests Var_Specs Production_Starts Description Production events Variable results Variable specifications Product/Grade changes
Variable specifications are stored in the Var_Specs table. The logical key for the table is as follows:
Field            Description
Var_Id           The variable the specifications were entered on.
Prod_Id          The product the specifications were entered on.
Effective_Date   The timestamp of when the specification transaction was approved.
Expiration_Date  The timestamp of when the specification transaction expired. For the currently active specification, this is normally NULL and is set when a new transaction is created. However, if a time-limited transaction is created it will be preset.
The behaviour of the Var_Specs table is similar to that of the Production_Starts table in that the specification records have to be in sequence and cannot overlap (i.e. the Effective_Date of a record must be greater than or equal to the Expiration_Date of the previous record). This behaviour is enforced by Plant Applications when creating transactions through the Administrator. Central specifications are stored in the Active_Specs table and are automatically copied to the Var_Specs table when changes are made. As such, Var_Specs is the preferred table to report on.
Consideration should also be given to the SpecificationSetting site parameter (Parm_Id = 13). This site parameter controls whether a value equal to a limit is out-of-spec or not. It primarily affects the way specification deviations are displayed in the Plant Applications clients. SpecificationSetting has two possible values, as follows:
Value  Description
1      The value is considered out-of-spec if Value < Lower Limit or Value > Upper Limit (a value equal to a limit is in-spec)
2      The value is considered out-of-spec if Value <= Lower Limit or Value >= Upper Limit (a value equal to a limit is out-of-spec)
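The two settings can be sketched as a small predicate (Python; illustrative only — the product evaluates this logic in SQL, as the report query that follows does for the upper warning limit):

```python
def is_out_of_spec(value, lower, upper, specification_setting):
    """SpecificationSetting = 1: a value equal to a limit is in-spec.
    SpecificationSetting = 2: a value equal to a limit is out-of-spec."""
    if specification_setting == 1:
        return value < lower or value > upper
    if specification_setting == 2:
        return value <= lower or value >= upper
    raise ValueError("unknown SpecificationSetting")

# A result exactly on the upper limit: in-spec under setting 1,
# out-of-spec under setting 2.
assert is_out_of_spec(10.0, 0.0, 10.0, 1) is False
assert is_out_of_spec(10.0, 0.0, 10.0, 2) is True
```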
SELECT @SpecificationSetting = Value
FROM Site_Parameters
WHERE Parm_Id = 13

SELECT v.Var_Desc,
       t.Result,
       vs.L_Reject, vs.L_Warning, vs.L_User,
       vs.Target,
       vs.U_User, vs.U_Warning, vs.U_Reject,
       CASE @SpecificationSetting
         WHEN 1 THEN CASE WHEN convert(float, t.Result) > convert(float, vs.U_Warning)
                          THEN 'WARNING' ELSE '' END
         WHEN 2 THEN CASE WHEN convert(float, t.Result) >= convert(float, vs.U_Warning)
                          THEN 'WARNING' ELSE '' END
       END
FROM Tests t
JOIN Variables v
  ON t.Var_Id = v.Var_Id
JOIN Production_Starts ps
  ON v.PU_Id = ps.PU_Id
 AND t.Result_On >= ps.Start_Time
 AND ( t.Result_On < ps.End_Time OR ps.End_Time IS NULL)
LEFT JOIN Var_Specs vs
  ON t.Var_Id = vs.Var_Id
 AND ps.Prod_Id = vs.Prod_Id
 AND t.Result_On >= vs.Effective_Date
 AND ( t.Result_On < vs.Expiration_Date OR vs.Expiration_Date IS NULL)
WHERE t.Var_Id = @ReportVarId
  AND t.Result_On > @ReportStartTime
  AND t.Result_On < @ReportEndTime
Description Transactions Transaction groups Variable specification changes Product to unit assignments Central specification changes Characteristic to product/unit assignments Characteristic tree
-- Fill out the transaction data tables
-- Approve the transaction
EXEC spEM_ApproveTrans @Trans_Id, 1, 1, NULL, @ApprovedDate OUTPUT, @Effective_Date OUTPUT
The main configuration tables for the crew and shift schedule are:
Table Name Prod_Units Description Production units
The Crew_Schedule table holds detailed records for each shift change. Instead of containing a pattern or formula for calculating crew and shift, it contains a time-stamped record for every shift change. This table is normally filled out by hand in Excel and imported for a defined amount of time, determined by the plant's actual crew schedule forecast. Shift/crew changes are related to other data through the PU_Id and time (i.e. to determine what shift a production event is associated with, look for the shift change record that occurred within the same time frame and on the same unit).
Table_Fields ED_FieldTypes
If TableId = 1 (Events) Then KeyId = Event_Id
If TableId = 13 (PrdExec_Paths) Then KeyId = Path_Id
If TableId = 43 (Prod_Units) Then KeyId = PU_Id
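A hypothetical lookup for this mapping (Python; only the three entries shown above are included — the complete mapping appears later in this section):

```python
# TableId -> key column whose value is stored in KeyId.
# Only the three entries from the examples above are included here.
UDP_KEY_COLUMNS = {
    1: "Event_Id",   # Events
    13: "Path_Id",   # PrdExec_Paths
    43: "PU_Id",     # Prod_Units
}

def udp_key_column(table_id):
    """Return the name of the key column to join KeyId against."""
    return UDP_KEY_COLUMNS[table_id]

assert udp_key_column(1) == "Event_Id"
assert udp_key_column(43) == "PU_Id"
```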
The main data table for UDPs is Table_Fields_Values. The logical key for the table is as follows:
Field           Description
TableId         The table the UDP is associated with (i.e. Prod_Units, Department, etc)
Table_Field_Id  The UDP itself. A particular UDP can be associated with multiple different tables.
KeyId           The unique primary key identifier for the table defined by TableId.
The UDP value is always stored as a string, so the field type is generally not necessary when retrieving the UDP value, as the data type is already known. As such, the field type is only important when creating a new UDP. The KeyId value is dependent on which table the UDP is for. For example, if the TableId corresponds to the Prod_Units table (i.e. Production Units), then the KeyId will be equal to a particular PU_Id. If the TableId corresponds to the PrdExec_Paths table (i.e. Execution Paths), then the KeyId will be equal to a particular Path_Id. The following is an example of the configuration of a UDP for a Production Unit in the Administrator:
The following is an example query of how to retrieve the UDP value for the above configuration:
SELECT tfv.Value
FROM Table_Fields_Values tfv
JOIN Tables t
  ON tfv.TableId = t.TableId
JOIN Table_Fields tf
  ON tf.Table_Field_Id = tfv.Table_Field_Id
JOIN Prod_Units pu
  ON tfv.KeyId = pu.PU_Id -- This depends on the TableId being referenced
WHERE t.TableName = 'Prod_Units'
  AND tf.Table_Field_Desc = 'MyUDP'
  AND pu.PU_Desc = 'Machine 1'
The following is an example configuration of a UDP for an Execution Path in the Administrator:
The following is an example query of how to retrieve the UDP value for the above configuration:
SELECT tfv.Value
FROM Table_Fields_Values tfv
JOIN Tables t
  ON tfv.TableId = t.TableId
JOIN Table_Fields tf
  ON tf.Table_Field_Id = tfv.Table_Field_Id
JOIN PrdExec_Paths p
  ON tfv.KeyId = p.Path_Id
WHERE t.TableName = 'PrdExec_Paths'
  AND tf.Table_Field_Desc = 'MyUDP'
  AND p.Path_Code = 'M1'
While only a few of the tables have an interface (either in the Client or Administrator), there are many tables currently defined in Tables and more are continually being added. Another thing to note is that while UDPs are typically created for custom configuration, they can also be used to track additional information in data tables (i.e. Events). The following table lists a subset of the available SQL tables and the corresponding keys that are referenced in Table_Fields_Values.
Table Id  Table                              Key
1         Events                             Event_Id
2         Production_Starts                  Start_Id
3         Timed_Event_Details                TEDet_Id
4         Waste_Event_Details                WED_Id
5         PrdExec_Input_Event                Input_Event_Id
6         PrdExec_Input_Event_History        Input_Event_History_Id
7         Production_Plan                    PP_Id
8         Production_Setup                   PP_Setup_Id
9         Production_Setup_Detail            PP_Setup_Detail_Id
10        Event_Components                   Component_Id
11        User_Defined_Events                UDE_Id
12        Production_Plan_Starts             PP_Start_Id
13        PrdExec_Paths                      Path_Id
14        Event_Details                      Event_Id
17        Departments                        Dept_Id
18        Prod_Lines                         PL_Id
19        PU_Groups                          PUG_Id
20        Variables                          Var_Id
21        Product_Family                     Product_Family_Id
22        Product_Groups                     Product_Grp_Id
23        Products                           Prod_Id
24        Event_Reasons                      Reason_Id
25        Event_Reason_Catagories            ERC_Id
26        Bill_Of_Material_Formulation       BOM_Formulation_Id
27        Subscription                       Subscription_Id
28        Bill_Of_Material_Formulation_Item  BOM_Formulation_Item_Id
29        Subscription_Group                 Subscription_Group_Id
30        PrdExec_Path_Units                 PEPU_Id
31        Report_Types                       Report_Type_Id
32        Report_Definitions                 Report_Id
33        Report_Runs                        Run_Id
34        Production_Plan_Statuses           PP_Status_Id
43        Prod_Units                         PU_Id
11.11 Language
Language support in Plant Applications has 2 flavours:
1) The standard client components have a defined list of translations that can be installed during setup. These will be referenced depending on the language setting of the user.
2) Most of the configuration tables support both a local and a global language, which allows users to see their configuration in one of 2 languages.
The above 2 options are described in more detail in the Administrator documentation under Multi-Lingual Support. The main data tables for multi-lingual support are:

Table Name       Description
Languages        Fixed content table that lists the available languages.
Language_Data    Contains the prompts and translations for the client application components (i.e. Plant Applications Client, Web Server and Excel Add-In). This table contains data for the installed languages.
Site_Parameters  Contains the default language reference for all users. The LanguageNumber parameter (Parm_Id = 8) contains the Language_Id from the Languages table.
User_Parameters  Contains the language reference for a particular user. The LanguageNumber parameter (Parm_Id = 8) contains the Language_Id from the Languages table.

In addition, each configuration table (i.e. Reasons, Prod_Units, Variables, etc) contains a _Local and _Global description field, which allows 2 translation options for created configuration items.
SELECT @LanguageId = convert(int, Value)
FROM Site_Parameters
WHERE Parm_Id = 8

SELECT @LanguageId = coalesce(convert(int, Value), @LanguageId)
FROM User_Parameters
WHERE Parm_Id = 8
  AND User_Id = 1
The override value is stored in the Language_Data table in an additional record, but with the Language_Id set to a negative value (i.e. Language_Id = 2 becomes Language_Id = -2). As such, to retrieve the value for a particular prompt, 2 records must be selected. For example,
SELECT coalesce(ldo.Prompt_String, lds.Prompt_String)
FROM Language_Data lds
LEFT JOIN Language_Data ldo
  ON ldo.Language_Id = (-lds.Language_Id)
 AND ldo.Prompt_Number = lds.Prompt_Number
WHERE lds.Language_Id = 2 -- French
  AND lds.Prompt_Number = 30001
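The override-then-fallback selection can be sketched in Python (the prompt strings here are hypothetical; the lookup mirrors the COALESCE over the self-join above):

```python
def resolve_prompt(language_data, language_id, prompt_number):
    """Prefer the override row (negative Language_Id) over the standard
    translation row, falling back when no override exists."""
    # language_data maps (Language_Id, Prompt_Number) -> Prompt_String
    override = language_data.get((-language_id, prompt_number))
    standard = language_data.get((language_id, prompt_number))
    return override if override is not None else standard

# Hypothetical Language_Data rows for French (Language_Id = 2).
data = {
    (2, 30001): "Unité",          # standard translation
    (-2, 30001): "Unité (site)",  # site-specific override
    (2, 30002): "Ligne",          # no override for this prompt
}
assert resolve_prompt(data, 2, 30001) == "Unité (site)"
assert resolve_prompt(data, 2, 30002) == "Ligne"
```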
Additional records can be added to both the Languages table and Language_Data tables to support new languages and/or custom reports. For the Languages table, new records should start with a Language_Id > 5000 while new records in the Language_Data table should start with a Prompt_Number > 500000.
SELECT @SiteLanguageId = convert(int, Value)
FROM Site_Parameters
WHERE Parm_Id = 8

SELECT @UserLanguageId = coalesce(convert(int, Value), @SiteLanguageId)
FROM User_Parameters
WHERE Parm_Id = 8
  AND User_Id = 1

SELECT CASE WHEN @UserLanguageId <> @SiteLanguageId OR @UserLanguageId IS NULL
            THEN coalesce(Var_Desc_Global, Var_Desc_Local)
            ELSE Var_Desc_Local
       END
- Added languages
- Added inventory and net production
- Added NOLOCK, Monitor Blocking and more information on index choices
- Added Event_History section
13.0 References

http://www.windowsitpro.com/Articles/Index.cfm?ArticleID=38039&DisplayTab=Article
http://vyaskn.tripod.com/differences_between_set_and_select.htm
http://www.idisoft.com/products/sample/sprecompile/sprecompile_overview.htm
http://www.sql-server-performance.com/rd_optimizing_sp_recompiles.asp
http://www.sqlservercentral.com/columnists/WFillis/2764.asp
Type  Description
1     Production Events
2     Variable Test Values
3     Grade Change
5     Downtime Events
6     Alarms
7     Sheet Columns
8     User Defined Events
9     Waste Events
10    Event Details
11    Genealogy Event Components
12    Genealogy Input Events
13    Defect Details
14    Historian Read
15    Production Plan
16    Production Setup
17    Production Plan Starts
18    Production Path Unit Starts
19    Production Statistics
20    Historian Write
50    Output File
The standard stored procedure spServer_CmnShowResultSets provides the current list and basic formats of the result sets available. More detailed information is contained within this Appendix.
3   Event Id
4   Event Number
5   Unit Id
6   Timestamp
7   Applied Product
8   Source Event
9   Event Status
10  Confirmed
11  User Id
12  Update Type
13  Conformance
14  TestPctComplete
15  Start Time
16  Transaction Number
17  Testing Status
18  Comment Id
19  Event Sub Type Id
20  Entry TimeStamp
Events.Start_Time
Genealogy is best handled using the Genealogy Event Components result sets and tables. However, the Source_Event_Id field provides some functionality on its own: if the Source_Event_Id field is filled out and the parent event is deleted, then the child event will also be deleted automatically.
14.1.01 Example
DECLARE @PUId        int,
        @EventStatus int

DECLARE @Events TABLE (
    ResultSetType  int DEFAULT 1,
    Id             int IDENTITY,
    TransType      int DEFAULT 1,
    EventId        int NULL,
    EventNum       varchar(50) NULL,
    PUId           int NULL,
    TimeStamp      varchar(50) NULL,
    AppliedProduct int NULL,
    SourceEventId  int NULL,
    EventStatus    int NULL,
    Confirmed      int DEFAULT 1,
    UserId         int NULL,
GE Fanuc Automation
Page 86 of 120
int DEFAULT 0)
SELECT @EventStatus = ProdStatus_Id FROM Production_Status WHERE ProdStatus_Desc = 'Complete' INSERT INTO @Events ( EventNum, PUId, TimeStamp, EventStatus)
VALUES ('ABC123', @PUId, convert(varchar(50), getdate(), 120), @EventStatus) -- Output results SELECT ResultSetType, Id, TransType, EventId, EventNum, PUId, TimeStamp, AppliedProduct, SourceEvent, EventStatus, Confirmed, UserId, PostUpdate FROM @Events ORDER BY Id ASC
14.2 Variable Test Values

The following should be taken into consideration when using the Variable Values result set:

- There is no delete functionality with the Variable Values result set, so to effectively delete a variable value you must update the value to NULL.
- If the update type is set to post-update, calculations that depend on the variable value will not be fired.
14.2.01 Example
DECLARE @VarId        int,
        @PUId         int,
        @VarPrecision int

DECLARE @VariableResults TABLE (
    ResultSetType int DEFAULT 2,
    VarId         int NULL,
    PUId          int NULL,
    UserId        int NULL,
    Cancelled     int DEFAULT 0,
    Result        varchar(50) NULL,
    ResultOn      varchar(50) NULL,
    TransType     int DEFAULT 1,
    PostUpdate    int DEFAULT 0)

SELECT @VarId        = Var_Id,
       @PUId         = PU_Id,
       @VarPrecision = Var_Precision
FROM Variables
WHERE Var_Desc = 'MyVariable'

INSERT INTO @VariableResults (
    VarId,
    PUId,
    Result,
    ResultOn)
VALUES (
    @VarId,
    @PUId,
    convert(varchar(50), round(123.456, @VarPrecision)), -- example value
    convert(varchar(50), getdate(), 120))

-- Output results
SELECT ResultSetType,
       VarId,
       PUId,
       UserId,
       Cancelled,
       Result,
       ResultOn,
       TransType,
       PostUpdate
FROM @VariableResults
14.3 Grade Change

Order  Field Name       Values/Table Reference
0      Result Set Type  3
1      Grade Change Id  Production_Starts.Start_Id
2      Unit Id          Production_Starts.PU_Id, Prod_Units.PU_Id
3      Product Id       Products.Prod_Id
4      TimeStamp        Production_Starts.Start_Time
5      Update Type      0 = Pre-Update, 1 = Post-Update
6      User Id          Production_Starts.User_Id, Users.User_Id
14.3.01 Example
DECLARE @PUId   int,
        @ProdId int

DECLARE @ProductionStarts TABLE (
    ResultSetType int DEFAULT 3,
    StartId       int NULL,
    PUId          int NULL,
    ProdId        int NULL,
    StartTime     varchar(50) NULL,
    PostUpdate    int DEFAULT 0)

SELECT @PUId = PU_Id
FROM Prod_Units
WHERE PU_Desc = 'MyUnit'

SELECT @ProdId = Prod_Id
FROM Products
WHERE Prod_Code = 'MyProductCode'

INSERT INTO @ProductionStarts (
    PUId,
    ProdId,
    StartTime)
VALUES (@PUId, @ProdId, convert(varchar(50), getdate(), 120))

-- Output results
SELECT ResultSetType,
       StartId,
       PUId,
       ProdId,
       StartTime,
       PostUpdate
FROM @ProductionStarts
14.4 Downtime Events
The Downtime result sets do not have a post-update option, so you shouldn't attempt to manually insert records and then issue a result set. If you do, you risk undoing changes that you've already made. For example, if you open an event and issue the result set, and then close the event a second later before the DBMgr has had a chance to process the result set, the DBMgr will end up reopening the event when it does get around to processing the result set.
14.4.01 Example
DECLARE @MachineDown int,
        @PUId        int,
        @LocationId  int,
        @ReasonId1   int,
        @StartTime   datetime

DECLARE @DowntimeEvents TABLE (
    ResultSetType int DEFAULT 5,
    PUId          int NULL,
    SourcePUId    int NULL,
    StatusId      int NULL,
    FaultId       int NULL,
    ReasonLevel1  int NULL,
    ReasonLevel2  int NULL,
    ReasonLevel3  int NULL,
    ReasonLevel4  int NULL,
    ProdRate      int NULL,
    Duration      float NULL,
    TransType     int DEFAULT 1,
    StartTime     varchar(50) NULL,
    EndTime       varchar(50) NULL,
    TEDetId       int NULL)

SELECT @PUId = PU_Id
FROM Prod_Units
WHERE PU_Desc = 'MyUnit'

SELECT @LocationId = PU_Id
FROM Prod_Units
WHERE PU_Desc = 'MyLocation'

SELECT @ReasonId1 = Event_Reason_Id
FROM Event_Reasons
WHERE Event_Reason_Name = 'MyReason'

IF @MachineDown = 1
BEGIN
    -- The following opens the downtime event
    INSERT INTO @DowntimeEvents (
        PUId,
        SourcePUId,
        ReasonLevel1,
        StartTime)
    VALUES (@PUId, @LocationId, @ReasonId1, convert(varchar(50), getdate(), 120))
END
ELSE
BEGIN
    -- Get the current downtime event
    SELECT @StartTime = Start_Time
    FROM Timed_Event_Details
    WHERE PU_Id = @PUId
      AND Start_Time <= getdate()
      AND End_Time IS NULL

    -- The following closes the downtime event
    INSERT INTO @DowntimeEvents (
        TransType,
        PUId,
        StartTime,
        EndTime)
    VALUES (4, @PUId, @StartTime, convert(varchar(50), getdate(), 120))
END

-- Output results
SELECT ResultSetType,
       PUId,
       SourcePUId,
       StatusId,
       FaultId,
       ReasonLevel1,
       ReasonLevel2,
       ReasonLevel3,
       ReasonLevel4,
       ProdRate,
       Duration,
       TransType,
       StartTime,
       EndTime,
       TEDetId
FROM @DowntimeEvents
14.5 Alarms
The Alarm result set is used for notifying clients about alarms. The result set alone will not create the alarm, so the alarm record has to be created manually in the Alarms table before the result set is issued. Furthermore, the alarm has to be started and then ended separately for the alarm result set to work (i.e. the alarm must be opened by issuing a result set with a NULL End Time and then closed by issuing a result set with the End Time filled out).
Order  Field Name                    Values/Table Reference
0      Result Set Type               6
1      Update Type                   0 = Pre-Update, 1 = Post-Update
2      Transaction Number
3      Alarm Id                      Alarms.Alarm_Id
4      Alarm Template Data Id        Alarm_Template_Var_Data.ATD_Id
5      Start Time                    Alarms.Start_Time
6      End Time                      Alarms.End_Time
7      Duration                      Alarms.Duration
8      Acknowledged                  Alarms.Ack
9      Acknowledged Timestamp        Alarms.Ack_On
10     Acknowledged By               Alarms.Ack_By
11     Starting Value                Alarms.Start_Result
12     Ending Value                  Alarms.End_Result
13     Minimum Value                 Alarms.Min_Result
14     Maximum Value                 Alarms.Max_Result
15     Cause 1                       Alarms.Cause1, Event_Reasons.Reason_Id
16     Cause 2                       Alarms.Cause2, Event_Reasons.Reason_Id
17     Cause 3                       Alarms.Cause3, Event_Reasons.Reason_Id
18     Cause 4                       Alarms.Cause4, Event_Reasons.Reason_Id
19     Cause Comment Id              Alarms.Cause_Comment_Id, Comments.Comment_Id
20     Action 1                      Alarms.Action1, Event_Reasons.Reason_Id
21     Action 2                      Alarms.Action2, Event_Reasons.Reason_Id
22     Action 3                      Alarms.Action3, Event_Reasons.Reason_Id
23     Action 4                      Alarms.Action4, Event_Reasons.Reason_Id
24     Action Comment Id             Alarms.Action_Comment_Id, Comments.Comment_Id
25     Research User Id              Alarms.Research_User_Id, Users.User_Id
26     Research Status Id            Alarms.Research_Status_Id
27     Research Open Date            Alarms.Research_Open_Date
28     Research Close Date           Alarms.Research_Close_Date
29     Research Comment Id           Alarms.Research_Comment_Id, Comments.Comment_Id
30     Source PU Id                  Alarms.Source_PU_Id, Prod_Units.PU_Id
31     Alarm Type Id                 Alarms.Alarm_Type_Id, Alarm_Types.Alarm_Type_Id
32     Key Id                        Variables.Var_Id, Alarm_Template_Var_Data.Var_Id
33     Alarm Description             Alarms.Alarm_Desc
34     Transaction Type              1 = Add, 2 = Update, 3 = Delete
35     Template Variable Comment Id  Alarm_Templates.Comment_Id, Comments.Comment_Id
36     Alarm Priority Id             Alarm_Templates.AP_Id, Alarm_Priorities.AP_Id
37     Alarm Template Id             Alarm_Templates.AT_Id
38     Variable Comment Id           Variables.Comment_Id, Comments.Comment_Id
39     Cutoff                        Alarms.Cutoff
14.5.01 Example
DECLARE @TimeStamp   datetime,
        @AlarmId     int,
        @ATDId       int,
        @AlarmTypeId int,
        @ATId        int,
        @ATDesc      varchar(50),
        @VarId       int,
        @VarDesc     varchar(50),
        @AlarmCount  int,
        @PUId        int,
        @Message     varchar(50),
        @UserId      int

DECLARE @Alarms TABLE (
    ResultSetType             int DEFAULT 6,
    PreUpdate                 int DEFAULT 0,
    TransNum                  int DEFAULT 0,
    AlarmId                   int NULL,
    ATDId                     int NULL,
    StartTime                 varchar(50) NULL,
    EndTime                   varchar(50) NULL,
    Duration                  float NULL,
    Ack                       bit DEFAULT 0,
    AckOn                     varchar(50) NULL,
    AckBy                     int NULL,
    StartResult               varchar(50) NULL,
    EndResult                 varchar(50) NULL,
    MinResult                 varchar(50) NULL,
    MaxResult                 varchar(50) NULL,
    Cause1                    int NULL,
    Cause2                    int NULL,
    Cause3                    int NULL,
    Cause4                    int NULL,
    CauseCommentId            int NULL,
    Action1                   int NULL,
    Action2                   int NULL,
    Action3                   int NULL,
    Action4                   int NULL,
    ActionCommentId           int NULL,
    ResearchUserId            int NULL,
    ResearchStatusId          int NULL,
    ResearchOpenDate          varchar(50) NULL,
    ResearchCloseDate         varchar(50) NULL,
    ResearchCommentId         int NULL,
    SourcePUId                int NULL,
    AlarmTypeId               int NULL,
    KeyId                     int NULL,
    AlarmDesc                 char(50),
    TransType                 int NULL,
    TemplateVariableCommentId int NULL,
    APId                      int NULL,
    ATId                      int NULL,
    VarCommentId              int NULL,
    Cutoff                    tinyint NULL)

SELECT @ATId        = AT_Id,
       @AlarmTypeId = Alarm_Type_Id
FROM Alarm_Templates
WHERE AT_Desc = @ATDesc

SELECT @VarId = Var_Id
FROM Variables
WHERE PU_Id = @PUId
  AND Var_Desc = @VarDesc

SELECT @ATDId = ATD_Id
FROM Alarm_Template_Var_Data
WHERE Var_Id = @VarId
  AND AT_Id = @ATId

IF @VarId IS NOT NULL AND @ATId IS NOT NULL AND @ATDId IS NOT NULL
BEGIN
    SELECT @AlarmCount = count(Alarm_Id) + 1
    FROM Alarms
    WHERE ATD_Id = @ATDId
      AND Key_Id = @VarId
      AND Start_Time = @TimeStamp

    -- Create the alarm record manually
    INSERT Alarms (
        ATD_Id, Start_Time, Start_Result, Alarm_Type_Id,
        Key_Id, Alarm_Desc, User_Id)
    VALUES (
        @ATDId, @TimeStamp, @AlarmCount, @AlarmTypeId,
        @VarId, @Message, @UserId)

    SELECT @AlarmId = @@Identity

    -- Queue the Add result set (End Time still NULL opens the alarm)
    INSERT @Alarms (
        PreUpdate, TransNum, AlarmId, ATDId, StartTime, EndTime, Duration,
        Ack, AckOn, AckBy, StartResult, EndResult, MinResult, MaxResult,
        Cause1, Cause2, Cause3, Cause4, CauseCommentId,
        Action1, Action2, Action3, Action4, ActionCommentId,
        ResearchUserId, ResearchStatusId, ResearchOpenDate, ResearchCloseDate,
        ResearchCommentId, SourcePUId, AlarmTypeId, KeyId, AlarmDesc, TransType,
        TemplateVariableCommentId, APId, ATId, VarCommentId, Cutoff)
    SELECT 0, 0, a.Alarm_Id, a.ATD_Id, a.Start_Time, a.End_Time, a.Duration,
           a.Ack, a.Ack_On, a.Ack_By, a.Start_Result, a.End_Result, a.Min_Result, a.Max_Result,
           a.Cause1, a.Cause2, a.Cause3, a.Cause4, a.Cause_Comment_Id,
           a.Action1, a.Action2, a.Action3, a.Action4, a.Action_Comment_Id,
           a.Research_User_Id, a.Research_Status_Id, a.Research_Open_Date, a.Research_Close_Date,
           a.Research_Comment_Id, a.Source_PU_Id, a.Alarm_Type_Id, a.Key_Id, a.Alarm_Desc, 1,
           d.Comment_Id, t.AP_Id, d.AT_Id, v.Comment_Id, 0
    FROM Alarms a
    INNER JOIN Variables v ON a.Key_Id = v.Var_Id
    INNER JOIN Alarm_Template_Var_Data d ON a.ATD_Id = d.ATD_Id
    INNER JOIN Alarm_Templates t ON d.AT_Id = t.AT_Id
    WHERE a.Alarm_Id = @AlarmId

    -- Close the alarm
    UPDATE Alarms
    SET End_Time = dateadd(minute, 1, @TimeStamp)
    WHERE Alarm_Id = @AlarmId

    -- Queue the Update result set (End Time filled out closes the alarm)
    INSERT @Alarms (
        PreUpdate, TransNum, AlarmId, ATDId, StartTime, EndTime, Duration,
        Ack, AckOn, AckBy, StartResult, EndResult, MinResult, MaxResult,
        Cause1, Cause2, Cause3, Cause4, CauseCommentId,
        Action1, Action2, Action3, Action4, ActionCommentId,
        ResearchUserId, ResearchStatusId, ResearchOpenDate, ResearchCloseDate,
        ResearchCommentId, SourcePUId, AlarmTypeId, KeyId, AlarmDesc, TransType,
        TemplateVariableCommentId, APId, ATId, VarCommentId, Cutoff)
    SELECT 0, 0, a.Alarm_Id, a.ATD_Id, a.Start_Time, a.End_Time, a.Duration,
           a.Ack, a.Ack_On, a.Ack_By, a.Start_Result, a.End_Result, a.Min_Result, a.Max_Result,
           a.Cause1, a.Cause2, a.Cause3, a.Cause4, a.Cause_Comment_Id,
           a.Action1, a.Action2, a.Action3, a.Action4, a.Action_Comment_Id,
           a.Research_User_Id, a.Research_Status_Id, a.Research_Open_Date, a.Research_Close_Date,
           a.Research_Comment_Id, a.Source_PU_Id, a.Alarm_Type_Id, a.Key_Id, a.Alarm_Desc, 2,
           d.Comment_Id, t.AP_Id, d.AT_Id, v.Comment_Id, 0
    FROM Alarms a
    INNER JOIN Variables v ON a.Key_Id = v.Var_Id
    INNER JOIN Alarm_Template_Var_Data d ON a.ATD_Id = d.ATD_Id
    INNER JOIN Alarm_Templates t ON d.AT_Id = t.AT_Id
    WHERE a.Alarm_Id = @AlarmId

    -- Output results
    SELECT ResultSetType, PreUpdate, TransNum, AlarmId, ATDId, StartTime, EndTime, Duration,
           Ack, AckOn, AckBy, StartResult, EndResult, MinResult, MaxResult,
           Cause1, Cause2, Cause3, Cause4, CauseCommentId,
           Action1, Action2, Action3, Action4, ActionCommentId,
           ResearchUserId, ResearchStatusId, ResearchOpenDate, ResearchCloseDate,
           ResearchCommentId, SourcePUId, AlarmTypeId, KeyId, AlarmDesc, TransType,
           TemplateVariableCommentId, APId, ATId, VarCommentId, Cutoff
    FROM @Alarms
END
14.6 Sheet Columns
14.6.01 Example
DECLARE @SheetId int

DECLARE @SheetColumns TABLE (
    ResultSetType int DEFAULT 7,
    SheetId       int NULL,
    UserId        int NULL,
    TransType     int DEFAULT 1,
    TimeStamp     varchar(50) NULL,
    PostUpdate    int DEFAULT 0)

SELECT @SheetId = Sheet_Id
FROM Sheets
WHERE Sheet_Desc = 'MySheet'

INSERT INTO @SheetColumns (
    SheetId,
    TimeStamp)
VALUES (@SheetId, convert(varchar(50), getdate(), 120))

-- Output results
SELECT ResultSetType,
       SheetId,
       UserId,
       TransType,
       TimeStamp,
       PostUpdate
FROM @SheetColumns
14.7 User Defined Events

Order  Field Name           Values/Table Reference
28     Transaction Type     1 = Add, 2 = Update, 3 = Delete
29     Event Sub Type Desc  Event_Subtypes.Event_Subtype_Desc
30     Transaction Number
31     User Id              User_Defined_Events.User_Id, Users.User_Id
14.7.01 Example
DECLARE @UserDefinedEvents TABLE (
    ResultSetType     int DEFAULT 8,
    PreUpdate         int DEFAULT 1,
    UDEId             int NULL,
    UDEDesc           varchar(50) NULL,
    PUId              int NULL,
    EventSubTypeId    int NULL,
    StartTime         datetime NULL,
    EndTime           datetime NULL,
    Duration          int NULL,
    Ack               int DEFAULT 0,
    AckOn             datetime NULL,
    AckBy             int NULL,
    Cause1            int NULL,
    Cause2            int NULL,
    Cause3            int NULL,
    Cause4            int NULL,
    CauseCommentId    int NULL,
    Action1           int NULL,
    Action2           int NULL,
    Action3           int NULL,
    Action4           int NULL,
    ActionCommentId   int NULL,
    ResearchUserId    int NULL,
    ResearchStatusId  int NULL,
    ResearchOpenDate  datetime NULL,
    ResearchCloseDate datetime NULL,
    ResearchCommentId int NULL,
    CommentId         int NULL,
    TransType         int DEFAULT 1,
    EventSubTypeDesc  varchar(50) NULL,
    TransNum          int DEFAULT 0,
    UserId            int NULL)
14.8 Waste Events

Order  Field Name
5      Waste Event Id
6      PU_Id
7      Source PU Id
8      Location Type Id
9      Measure Id
10     Reason 1
11     Reason 2
12     Reason 3
13     Reason 4
14     Event Id
15     Amount
16     Marker 1
17     Marker 2
18     Timestamp
19     Action 1
20     Action 2
21     Action 3
22     Action 4
23     Action Comment Id
24     Research Comment Id
25     Research Status Id
26     Research Open Date
27     Research Close Date
28     Comment Id
29     Target Prod Rate
30     Research User Id
If the Transaction Number is set to 0, a value of 0 returned for any dimension field will be set to NULL in the database. If the Transaction Number is set to 2, a value of 0 returned for any dimension field will be set to 0 in the database.
14.8.01 Example
DECLARE @WasteEvents TABLE (
    ResultSetType     int DEFAULT 9,
    PreUpdate         int DEFAULT 1,
    TransNum          int DEFAULT 0,
    UserId            int NULL,
    TransType         int DEFAULT 1,
    WEDId             int NULL,
    PUId              int NULL,
    SourcePUId        int NULL,
    WETId             int NULL,
    WEMTId            int NULL,
    Cause1            int NULL,
    Cause2            int NULL,
    Cause3            int NULL,
    Cause4            int NULL,
    EventId           int NULL,
    Amount            float NULL,
    Marker1           float NULL,
    Marker2            float NULL,
    TimeStamp         datetime NULL,
    Action1           int NULL,
    Action2           int NULL,
    Action3           int NULL,
    Action4           int NULL,
    ActionCommentId   int NULL,
    ResearchCommentId int NULL,
    ResearchStatusId  int NULL,
    ResearchOpenDate  datetime NULL,
    ResearchCloseDate datetime NULL,
    CommentId         int NULL,
    TargetProdRate    float NULL,
    ResearchUserId    int NULL)
14.9 Event Details

Order  Field Name
4      Transaction Number
5      Event Id
6      Unit Id
7      Primary Event Number
8      Alternate Event Number
9      Comment Id
10     Event Sub Type Id
11     Original Product
12     Applied Product
13     Event Status
14     Timestamp
15     Entry On
16     Production Plan Setup Detail Id
17     Shipment Item Id
18     Order Id
19     Order Line Id
20     Production Plan Id
21     Initial Dimension X
22     Initial Dimension Y
23     Initial Dimension Z
24     Initial Dimension A
25     Final Dimension X
26     Final Dimension Y
27     Final Dimension Z
28     Final Dimension A
29     Orientation X
30     Orientation Y
31     Orientation Z
If the Transaction Number is set to 0, a value of 0 returned for any dimension field will be set to NULL in the database. If the Transaction Number is set to 2, a value of 0 returned for any dimension field will be set to 0 in the database.
14.9.01 Example
DECLARE @EventDetails TABLE (
    ResultSetType     int DEFAULT 10,
    PostUpdate        int DEFAULT 1,
    UserId            int DEFAULT 1,
    TransType         int DEFAULT 1,
    TransNum          int NULL,
    EventId           int NULL,
    PUId              int NULL,
    PrimaryEventNum   varchar(25) NULL,
    AlternateEventNum varchar(25) NULL,
    CommentId         int NULL,
    EventType         int NULL,
    OriginalProduct   int NULL,
    AppliedProduct    int NULL,
    EventStatus       int NULL,
    TimeStamp         datetime NULL,
    EnteredOn         datetime NULL,
    PPSetupDetailId   int NULL,
    ShipmentItemId    int NULL,
    OrderId           int NULL,
    OrderLineId       int NULL,
    PPId              int NULL,
    InitialDimensionX float NULL,
    InitialDimensionY float NULL,
    InitialDimensionZ float NULL,
    InitialDimensionA float NULL,
    FinalDimensionX   float NULL,
    FinalDimensionY   float NULL,
    FinalDimensionZ   float NULL,
    FinalDimensionA   float NULL,
    OrientationX      tinyint NULL,
    OrientationY      tinyint NULL,
    OrientationZ      tinyint NULL)
14.10 Genealogy Event Components

Order  Field Name
4      Transaction Number
5      Component Id
6      Event Id
7      Source Event Id
8      Dimension X
9      Dimension Y
10     Dimension Z
11     Dimension A
If the Transaction Number is set to 0, a value of 0 returned for any dimension field will be set to NULL in the database. If the Transaction Number is set to 2, a value of 0 returned for any dimension field will be set to 0 in the database.
14.10.01 Example
DECLARE @EventComponents TABLE (
    ResultSetType int DEFAULT 11,
    PreUpdate     int DEFAULT 0,
    UserId        int NULL,
    TransType     int DEFAULT 1,
    TransNum      int NULL,
    ComponentId   int NULL,
    EventId       int NULL,
    SourceEventId int NULL,
    DimensionX    float NULL,
    DimensionY    float NULL,
    DimensionZ    float NULL,
    DimensionA    float NULL)
14.11 Genealogy Input Events

Order  Field Name
8      Production Event Input Id
9      Production Event Input Position Id
10     Event Id
11     Dimension X
12     Dimension Y
13     Dimension Z
14     Dimension A
15     Unloaded
14.12 Defects
Order  Field Name           Values/Table Reference
0      Result Set Type      13
1      Update Type          0 = Pre-Update, 1 = Post-Update
2      Transaction Type     1 = Add, 2 = Update, 3 = Delete
3      Transaction Number
4      Defect Detail Id     Defect_Details.Defect_Detail_Id
5      Defect Type Id       Defect_Details.Defect_Type_Id, Defect_Types.Defect_Type_Id
6      Cause 1              Defect_Details.Cause1, Event_Reasons.Reason_Id
7      Cause 2              Defect_Details.Cause2, Event_Reasons.Reason_Id
8      Cause 3              Defect_Details.Cause3, Event_Reasons.Reason_Id
9      Cause 4              Defect_Details.Cause4, Event_Reasons.Reason_Id
10     Cause Comment Id     Defect_Details.Cause_Comment_Id, Comments.Comment_Id
11     Action 1             Defect_Details.Action1, Event_Reasons.Reason_Id
12     Action 2             Defect_Details.Action2, Event_Reasons.Reason_Id
13     Action 3             Defect_Details.Action3, Event_Reasons.Reason_Id
14     Action 4             Defect_Details.Action4, Event_Reasons.Reason_Id
15     Action Comment Id    Defect_Details.Action_Comment_Id, Comments.Comment_Id
16     Research Status Id   Defect_Details.Research_Status_Id
17     Research Comment Id  Defect_Details.Research_Comment_Id, Comments.Comment_Id
18     Research User Id     Defect_Details.Research_User_Id, Users.User_Id
19     Event Id             Defect_Details.Event_Id, Events.Event_Id
20     Source Event Id      Defect_Details.Source_Event_Id, Events.Event_Id
21     Unit Id              Defect_Details.PU_Id, Prod_Units.PU_Id
22     Event Subtype Id     Defect_Details.Event_Subtype_Id, Event_Subtypes.Event_Subtype_Id
23     User Id              Defect_Details.User_Id, Users.User_Id
24     Visual Start X
25     Visual Start Y
26     Severity
27     Repeat
28     Dimension X          Defect_Details.Dimension_X
29     Dimension Y          Defect_Details.Dimension_Y
30     Dimension Z          Defect_Details.Dimension_Z
31     Amount               Defect_Details.Amount
32     Start Position X     Defect_Details.Start_Position_X
33     Start Position Y     Defect_Details.Start_Position_Y
34     End Position X       Defect_Details.End_Position_X
35     End Position Y       Defect_Details.End_Position_Y
36     Research Open Date   Defect_Details.Research_Open_Date
37     Research Close Date  Defect_Details.Research_Close_Date
38     Start Time           Defect_Details.Start_Time
39     End Time             Defect_Details.End_Time
40     Entry On             Defect_Details.Entry_On
14.13 Output File

Order  Field Name         Values/Table Reference
0      Result Set Type    50
1      File Number
2      File Name
3      Field Number
4      Field Name
5      Type               Alpha
6      Length
7      Precision
8      Value
9      Carriage Return
10     Construction Path
11     Final Path
12     Move Mask
13     Add Timestamp      0 = No, 1 = Short, 2 = Full
The following should be taken into consideration when using the Output File result set:

- The File Number is relevant only when building multiple files in a single block of records. A default value of 1 is sufficient when building a single file; otherwise, increment it as necessary.
- The Field Number must increment sequentially throughout the creation of the entire file, regardless of the field's column/row position. The overall formatting of the file is determined by the sequence of the fields and the placement of the Carriage Returns.
- The Field Name can be set to any value.
- The Construction Path cannot be the same as the Final Path. If they are the same, the file will not be created.
- The Move Mask must include the File Name or the file will not end up in the Final Path. The Move Mask will move all files that match it, regardless of where they came from (i.e. if the Move Mask is *.log, then all files in the Construction Path that match *.log will be moved to the Final Path). A simple way to limit this is to make the Move Mask the same as the File Name.
- The Add Timestamp option adds a timestamp to the end of the file name extension. If the option is set to 1, a File Name of Output.dat will become Output.dat120102.
14.13.01 Example
DECLARE @FileOutput TABLE (
    ResultSetType  int DEFAULT 50,
    FileNumber     int DEFAULT 1,
    FileName       varchar(255) NULL,
    FieldNumber    int IDENTITY,
    FieldName      varchar(20) DEFAULT '0',
    FieldType      varchar(20) DEFAULT 'Alpha',
    FieldLength    int NULL,
    FieldPrecision int DEFAULT 1,
    FieldValue     varchar(255) NULL,
    FieldCR        int DEFAULT 0,
    FieldBuildPath varchar(50) NULL,
    FieldFinalPath varchar(50) NULL,
    FieldMoveMask  varchar(50) NULL,
    AddTimestamp   int DEFAULT 0)

INSERT INTO @FileOutput (
    FileName,
    FieldLength,
    FieldValue,
    FieldCR,
    FieldBuildPath,
    FieldFinalPath,
    FieldMoveMask)
VALUES (
    'Output.dat',
    255,
    'MyDataFieldValue',
    1,
    'C:\Temp\',
    'C:\Output Directory\',
    'Output.dat')

-- Output results
SELECT *
FROM @FileOutput
ORDER BY FieldNumber ASC
/*
Description:
=========
This routine checks the level of index fragmentation, then defragments
the indexes and then reindexes them.

Change Date     Who     What
===========     ====    =====
*/

DECLARE @IndexList TABLE (
    RowId    int IDENTITY PRIMARY KEY,
    ObjectId int,
    IndexId  int)

CREATE TABLE #Indexes (
    RowId                int IDENTITY PRIMARY KEY,
    ObjectName           varchar(128),
    ObjectId             int,
    IndexName            varchar(128),
    IndexId              int,
    Level                int,
    Pages                int,
    Rows                 int,
    MinimumRecordSize    int,
    MaximumRecordSize    int,
    AverageRecordSize    int,
    ForwardedRecords     int,
    Extents              int,
    ExtentSwitches       int,
    AverageFreeBytes     real,
    AveragePageDensity   real,
    ScanDensity          real,
    BestCount            int,
    ActualCount          int,
    LogicalFragmentation real,
    ExtentFragmentation  real)

DECLARE @ObjectName           varchar(128),
        @ObjectId             int,
        @IndexName            varchar(128),
        @IndexId              int,
        @Debug                int,
        @Defragment           int,
        @Reindex              int,
        @Start_Time           datetime,
        @End_Time             datetime,
        @Time                 real,
        @Total_Start_Time     datetime,
        @Total_End_Time       datetime,
        @Total_Time           real,
        @Query_Time           real,
        @Defrag_Time          real,
        @Reindex_Time         real,
        @Rows                 int,
        @Row                  int,
        @ScanDensity          float,
        @LogicalFragmentation float,
        @DBCCCOMMAND          varchar(25),
        @DBCCOPTIONS          varchar(25),
        @SCANDENSITYLIMIT     float,
        @FRAGMENTATIONLIMIT   float

--------------------------------------------------------------------------------
-- Initialization
--------------------------------------------------------------------------------
-- Constants
SELECT @DBCCCOMMAND        = 'DBCC SHOWCONTIG (',
       @DBCCOPTIONS        = ') WITH TABLERESULTS',
       @SCANDENSITYLIMIT   = 90.0,
       @FRAGMENTATIONLIMIT = 10.0

-- Parameters (set as required)
SELECT @Defrag_Time      = 0,
       @Reindex_Time     = 0,
       @Total_Start_Time = getdate(),
       @Debug            = 1,
       @Defragment       = 1,
       @Reindex          = 1

--------------------------------------------------------------------------------
-- Get Fragmented Indexes
--------------------------------------------------------------------------------
SELECT @Start_Time = getdate()

IF @Debug = 1
BEGIN
    PRINT 'Querying Indexes...'
END

INSERT INTO @IndexList (
    ObjectId,
    IndexId)
SELECT si.id,
       si.IndID
FROM sysindexes si
JOIN sysobjects so ON so.name = si.name
                  AND si.id = so.Parent_obj
                  AND (   so.xtype = 'PK'
                       OR so.xtype = 'UQ')
WHERE si.IndID > 0

SELECT @Rows = @@ROWCOUNT,
       @Row  = 0

WHILE @Row < @Rows
BEGIN
    SELECT @Row = @Row + 1

    SELECT @ObjectId = ObjectId,
           @IndexId  = IndexId
    FROM @IndexList
    WHERE RowId = @Row

    INSERT #Indexes (
        ObjectName, ObjectId, IndexName, IndexId, Level, Pages, Rows,
        MinimumRecordSize, MaximumRecordSize, AverageRecordSize,
        ForwardedRecords, Extents, ExtentSwitches, AverageFreeBytes,
        AveragePageDensity, ScanDensity, BestCount, ActualCount,
        LogicalFragmentation, ExtentFragmentation)
    EXEC (@DBCCCOMMAND + @ObjectId + ',' + @IndexId + @DBCCOPTIONS)
END

SELECT @Query_Time = convert(real, datediff(s, @Start_Time, getdate()))/60.0

IF @Debug = 1
BEGIN
    SELECT ObjectName,
           IndexName,
           IndexId,
           ScanDensity,
           LogicalFragmentation,
           ExtentFragmentation
    FROM #Indexes
    WHERE ScanDensity < @SCANDENSITYLIMIT
       OR LogicalFragmentation > @FRAGMENTATIONLIMIT
    ORDER BY ObjectName ASC, IndexName ASC

    PRINT 'Queried Indexes in ' + ltrim(str(@Query_Time, 25, 2)) + ' min'
END

--------------------------------------------------------------------------------
-- Defragment Indexes
--------------------------------------------------------------------------------
IF @Defragment = 1
BEGIN
    SELECT @Row = 0
    WHILE @Row < @Rows
    BEGIN
        SELECT @Row = @Row + 1

        SELECT @ObjectName           = ObjectName,
               @ObjectId             = ObjectId,
               @IndexName            = IndexName,
               @IndexId              = IndexId,
               @ScanDensity          = ScanDensity,
               @LogicalFragmentation = LogicalFragmentation
        FROM #Indexes
        WHERE RowId = @Row

        IF @ScanDensity < @SCANDENSITYLIMIT OR @LogicalFragmentation > @FRAGMENTATIONLIMIT
        BEGIN
            IF @Debug = 1
            BEGIN
                PRINT 'Defragmenting ' + @ObjectName + '.' + @IndexName
            END

            SELECT @Start_Time = getdate()

            IF @Debug = 1
            BEGIN
                DBCC INDEXDEFRAG (0, @ObjectId, @IndexId)
            END
            ELSE
            BEGIN
                DBCC INDEXDEFRAG (0, @ObjectId, @IndexId) WITH NO_INFOMSGS
            END

            SELECT @Time = convert(real, datediff(s, @Start_Time, getdate()))/60.0
            SELECT @Defrag_Time = @Defrag_Time + @Time

            IF @Debug = 1
            BEGIN
                PRINT 'Defragmented ' + @ObjectName + '.' + @IndexName + ' in ' + ltrim(str(@Time, 25, 2)) + ' min'
            END
        END
    END
END

--------------------------------------------------------------------------------
-- Rebuild Indexes
--------------------------------------------------------------------------------
IF @Reindex = 1
BEGIN
    SELECT @Row = 0
    WHILE @Row < @Rows
    BEGIN
        SELECT @Row = @Row + 1

        SELECT @ObjectName           = ObjectName,
               @IndexName            = IndexName,
               @ScanDensity          = ScanDensity,
               @LogicalFragmentation = LogicalFragmentation
        FROM #Indexes
        WHERE RowId = @Row

        IF @ScanDensity < @SCANDENSITYLIMIT OR @LogicalFragmentation > @FRAGMENTATIONLIMIT
        BEGIN
            IF @Debug = 1
            BEGIN
                PRINT 'Reindexing ' + @ObjectName + '.' + @IndexName
            END

            SELECT @Start_Time = getdate()

            IF @Debug = 1
            BEGIN
                DBCC DBREINDEX (@ObjectName, @IndexName)
            END
            ELSE
            BEGIN
                DBCC DBREINDEX (@ObjectName, @IndexName) WITH NO_INFOMSGS
            END

            SELECT @Time = convert(real, datediff(s, @Start_Time, getdate()))/60.0
            SELECT @Reindex_Time = @Reindex_Time + @Time

            IF @Debug = 1
            BEGIN
                PRINT 'Reindexed ' + @ObjectName + '.' + @IndexName + ' in ' + ltrim(str(@Time, 25, 2)) + ' min'
            END
        END
    END
END

--------------------------------------------------------------------------------
-- End Game
--------------------------------------------------------------------------------
SELECT @Total_Time = convert(real, datediff(s, @Total_Start_Time, getdate()))/60.0

IF @Debug = 1
BEGIN
    PRINT 'Finished!'
    PRINT 'Query Time=' + ltrim(str(@Query_Time, 25, 2)) + ' min'
    PRINT 'Defrag Time=' + ltrim(str(@Defrag_Time, 25, 2)) + ' min'
    PRINT 'Reindex Time=' + ltrim(str(@Reindex_Time, 25, 2)) + ' min'
END
The embedded SQL code installs a SQL Server job that can be used to monitor blocking and parallelism issues. This job can only be installed on SQL Server 2000 Service Pack 3 or greater. The job runs on a configurable one-minute frequency and checks the master..sysprocesses table for blocking issues. If blocking is found, it records the blocking process, all blocking victim processes, and any processes that are currently running queries with parallelism. It can also optionally record the associated locks, but it is not recommended to enable that option and leave the job unattended. The following four tables are created and populated by the job:
Table Name                  Description
Local_Blocking_Log          List of the blocking processes.
Local_Blocking_Victims      List of the blocking victim processes.
Local_Blocking_Parallelism  List of processes with multiple threads at the time of the blocking.
Local_Blocking_Locks        List of the blocking process locks.
The following query is an example of how to look at the data in the table:
SELECT TOP 10
    Start_Time,
    Duration            = datediff(s, Start_Time, End_Time),
    BlockingSPID        = bl.SPID,
    BlockingProgram     = bl.Program_Name,
    BlockingObject      = so.name,
    BlockingInputBuffer = bl.Event_Info,
    BlockingText        = bl.Text,
    BlockingEncrypted   = bl.Encrypted,
    VictimSPID          = bv.SPID,
    VictimProgram       = bv.Program_Name,
    VictimObject        = vso.name,
    VictimInputBuffer   = bv.Event_Info,
    VictimText          = bv.Text,
    VictimEncrypted     = bv.Encrypted
FROM dbo.Local_Blocking_Log bl WITH (NOLOCK)
LEFT JOIN sysobjects so WITH (NOLOCK) ON bl.Object_Id = so.id
LEFT JOIN dbo.Local_Blocking_Victims bv WITH (NOLOCK) ON bl.BL_Id = bv.BL_Id
LEFT JOIN sysobjects vso WITH (NOLOCK) ON bv.Object_Id = vso.id
ORDER BY bl.BL_Id DESC

SELECT TOP 10
    TimeStamp,
    SPID,
    Program_Name,
    Host_Name,
    Text,
    Encrypted
FROM dbo.Local_Blocking_Parallelism bp WITH (NOLOCK)
LEFT JOIN sysobjects so WITH (NOLOCK) ON bp.Object_Id = so.id
ORDER BY bp.TimeStamp DESC, bp.ECId ASC
The job utilizes the fn_get_sql() function to retrieve the currently running text of the processes (stored in the Text field in all of the tables). This usually points to a particular query, which can then be addressed to resolve the blocking.
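The same lookup can be done ad hoc when investigating a live block. The sketch below assumes a SQL Server 2000 SP3 instance and a hypothetical spid of 51; the sql_handle comes from master..sysprocesses:

```sql
DECLARE @Handle binary(20)

-- Get the handle of the statement the process is currently running
SELECT @Handle = sql_handle
FROM master..sysprocesses WITH (NOLOCK)
WHERE SPID = 51    -- spid of interest (hypothetical)

-- Retrieve the running text for that handle
SELECT text, encrypted
FROM ::fn_get_sql(@Handle)
```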