JCL1
1> How do you access a file that has a disposition of KEEP?
Ans: You need to supply the volume serial number: VOL=SER=xxxxxx.

2> What is the difference between JES2 and JES3?
Ans: JES3 allocates datasets for all steps before the job is scheduled. In JES2, the datasets required by a step are allocated only just before the step executes.

3> What is the PERFORM parameter in the EXEC statement meant for?
Ans: In an OS/390 installation, the system programmer defines several performance groups, each with specific performance characteristics such as storage and speed. Each job class is associated with a default performance group. To override the default performance group associated with the class, code PERFORM=value on the JOB or EXEC statement. When coded on the EXEC statement, only that step is influenced by the performance group.

4> What is the work of the initiator?
Ans: The initiator is a system address space that selects jobs from the JES input queue, allocates the resources each job step needs, and runs them.

5> How do you execute from the second step of a PROC that is used in the JCL?
Ans: On the job card, use the RESTART option and specify the procname.stepname you want to restart at.

6> How can I get the number of records of a sequential file without browsing it?
Ans: You can use the TSO command COUNT over the sequential dataset to find the number of records in it without browsing it. (DFSORT's ICETOOL COUNT operator also reports a record count.)

7> I have a main program in a PDS, i.e. A.B.C, and the subprogram is in A.B.D. How would the DD statements look?
Ans: If you are calling your subprogram dynamically, the DD statements would be:
//STEP01 EXEC IGYWCL
//COBOL.SYSIN DD DSN=<name of PDS>(<main program>),DISP=SHR
//LKED.SYSLMOD DD DSN=<PDS name for load module>,DISP=SHR
//STEPLIB DD DSN=<PDS which contains the load module of the subprogram>,DISP=SHR (needed at run time so the dynamic call can locate the subprogram)
For a static call:
//STEP01 EXEC IGYWCL
//COBOL.SYSIN DD DSN=<name of PDS>(<main program>),DISP=SHR
//LKED.SYSLMOD DD DSN=<PDS name for load module>,DISP=SHR
//LKED.SYSLIB DD DSN=<PDS which contains the load module of the subprogram>,DISP=SHR

8> Why do we use EXPORT/IMPORT over REPRO?
Ans: There is a much faster way to do the same:
- Use the FAVER utility to take a backup (very fast). It will contain the data as well as all definitions (base cluster, AIX, path, all parameters).
- XMIT the dataset to where it has to go.
- Use the FAVER utility to restore it.
Also, the SORT utility is a faster tool for loading VSAM than IDCAMS.
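For comparison, a minimal IDCAMS REPRO job for copying a VSAM cluster might look like the sketch below; the dataset names are hypothetical, and the output cluster is assumed to be already defined.
//* Copy one VSAM cluster to another (both must already be defined)
//COPYVSAM EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  REPRO INDATASET(PROD.CUSTOMER.KSDS) -
        OUTDATASET(BKUP.CUSTOMER.KSDS)
/*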
9> What is the difference between addressing mode and residency mode?
Ans: The addressing mode specifies the addressing architecture used, i.e. 24-bit or 31-bit addressing (AMODE=24 or AMODE=31). The residency mode (RMODE) specifies whether the program resides above or below the 16MB line. Both attributes are assigned when the program is link-edited (see the link-edit sketch after question 10).

10> How do you convert a negative packed decimal number to a positive one using JCL?
Ans: JCL is not a programming language; it is a script language for executing programs and defining the data that programs use. Write a COBOL program to multiply the negative numbers by -1, then use JCL to execute the program.
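A minimal link-edit step that sets AMODE and RMODE explicitly might look like the following sketch; the library and member names are hypothetical, and your site's binder options may differ.
//* Bind the object deck and stamp the load module's AMODE/RMODE
//LKED     EXEC PGM=IEWL,PARM='AMODE=31,RMODE=24,LIST,MAP'
//SYSLIN   DD DISP=SHR,DSN=MY.OBJLIB(MYPGM)
//SYSLMOD  DD DISP=SHR,DSN=MY.LOADLIB(MYPGM)
//SYSPRINT DD SYSOUT=*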
EASY

1> How can I pass data from JCL to a COBOL program?
Ans: There are two ways (see the sketch after question 10 for both):
a> Through instream data (using SYSIN DD *)
b> Through the PARM parameter

2> How do you code instream data in JCL?
Ans:
//SYSIN DD *
input data
input data
/*

3> What is the maximum length of the PARM parameter?
Ans: 100 characters.

4> What if both JOBLIB and STEPLIB are specified?
Ans: JOBLIB is ignored; STEPLIB overrides it for that step.

5> What is the difference between primary and secondary allocation of a dataset?
Ans: Secondary allocation is done when more space is required than what has already been allocated.

6> What is the purpose and meaning of the TIME keyword, and what JCL statement is it associated with?
Ans: TIME specifies the maximum CPU time allocated for a particular job or job step. If TIME is on the JOB card, it relates to the entire job; on the EXEC statement, it relates to the job step. You can code TIME=1440 or TIME=NOLIMIT to remove the time limit so that the job will not be abended for exceeding it, or TIME=MAXIMUM to allow the maximum of about 248 days of continuous CPU time.

7> What is the difference between specifying DISP=OLD and DISP=SHR for a dataset?
Ans: DISP=OLD gives an exclusive hold. Reads start from the beginning of the dataset, but if you write, the existing data is overwritten, i.e. the old data is lost. DISP=SHR lets multiple users share the dataset; it should be used for read-only access.

8> What is the difference between BLKSIZE and LRECL?
Ans: LRECL is the logical record length (the size of one record). BLKSIZE is the physical size of the block of records that are grouped together. Both LRECL and BLKSIZE are expressed in bytes.

9> How can you determine whether the date contained in the header of a file is the current date?
Ans: Use the DATE parameter.

10> How do you check the return code in JCL?
Ans: The return code of a previous step can be tested with the COND parameter (or an IF/THEN/ENDIF construct).
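A minimal sketch combining both techniques from question 1; the program name, load library, PARM string, and data are hypothetical.
//* Pass a parameter string and instream data to a COBOL program
//STEP01  EXEC PGM=MYPGM,PARM='2024-01-31,RERUN'
//STEPLIB DD DISP=SHR,DSN=MY.LOADLIB
//SYSIN   DD *
RECORD ONE
RECORD TWO
/*
Inside the COBOL program, the PARM string arrives through the LINKAGE SECTION preceded by a halfword length field, while the instream records can be read with ACCEPT (which reads SYSIN by default) or as an ordinary sequential file.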
Medium
1> How do you bypass a step without the COND parameter?
Ans: By using IF/THEN/ENDIF, for example:
// IF (STEP1.RC = 0) THEN
//   ... steps to execute ...
// ENDIF

2> What is the difference between an in-stream procedure and an in-line PERFORM?
Ans: An in-stream PROC is defined right in the JCL stream and does not reside in a proclib. This is an older technique associated with punched cards, but it is still in use with setup and installation programs from vendors. The in-line PERFORM is a COBOL construct: instead of
PERFORM ADD-PARAGRAPH UNTIL SWITCH = 'Y'
you can code
PERFORM UNTIL SWITCH = 'Y'
    body of code
END-PERFORM
where "body of code" is the code from ADD-PARAGRAPH appearing in-line (i.e. right in the PERFORM) instead of in a separate, out-of-line paragraph.

3> How do you override a specific DDNAME in a PROC from the JCL?
Ans: //stepname.ddname DD DSN=...

4> How do you check the syntax of JCL without running it?
Ans: Code TYPRUN=SCAN on the JOB card.

5> Can you code instream data in a PROC?
Ans: No.

6> How do you change the default PROCLIB?
Ans: //ABC JCLLIB ORDER=(XY.MYPROCLIB,SYS1.PROCLIB)

7> What is COND=ONLY?
Ans: Execute the step only if a previous step terminated abnormally.

8> How do you restart a PROC from a particular step?
Ans: On the job card, specify RESTART=stepname.procstepname, where stepname is the name of the JCL step that invokes the PROC and procstepname is the name of the step within the PROC where you want execution to start.

9> What is the improvement over COND= in the latest version of MVS?
Ans: The IF/THEN/ELSE/ENDIF construct. With IF, a step is executed when the condition is satisfied; with the COND parameter, a step is bypassed when the condition is true, which is less intuitive.

10> What is a GDG? How is it referenced? How is it defined? What is a MODELDSCB?
GDG stands for generation data group. It is a dataset group with generations that can be referenced absolutely or relatively. A MODELDSCB is a model dataset label on the GDG's catalog volume whose DCB attributes new generations inherit on non-SMS systems.
Ans: It is defined using the IDCAMS command DEFINE GENERATIONDATAGROUP (DEF GDG for short). There are not many parameters, mainly a generation limit. If you create a GDG called MY.GDG, the first instance is cataloged as MY.GDG.G0001V00, and onwards. Each generation can be referenced absolutely, as above, or relatively: MY.GDG(0) is the most recent generation, MY.GDG(-1) is the next oldest, and so on. To create a new generation, you code DSN=MY.GDG(+1),DISP=(NEW,CATLG) etc. in your JCL as normal. Once the maximum number of generations is reached, the oldest one is dropped off (uncataloged and, depending on the SCRATCH option, deleted). GDGs are useful for backups and the like: if you run a particular batch job on a daily basis, it is often the case that you keep a 5- or 7-generation GDG to capture your output or parameters, giving you a week to print or reference them. (A worked JCL sketch follows below.)

11> Can we call one PROC from another? If yes, what is the maximum limit?
Ans: Yes, we can. PROCs can be nested up to 15 levels.
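A minimal sketch for the GDG answer in question 10, with hypothetical dataset names: it defines a 5-generation GDG and writes its first generation. On a non-SMS system you would also need a model DSCB; here explicit DCB attributes are supplied, which suffices under SMS.
//* Define a 5-generation GDG, then write generation (+1) with IEBGENER
//DEFGDG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GDG (NAME(MY.GDG) LIMIT(5) NOEMPTY SCRATCH)
/*
//WRITEGEN EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD *
SAMPLE RECORD
/*
//SYSUT2   DD DSN=MY.GDG(+1),DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(1,1)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)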
DB2
The bootstrap data set (BSDS) is a VSAM key-sequenced data set (KSDS). This KSDS contains information that is critical to DB2, such as the names of the logs. DB2 uses information in the BSDS for system restarts and for any activity that requires reading the log. Specifically, the BSDS contains:
- An inventory of all active and archive log data sets that are known to DB2. DB2 uses this information to track the active and archive log data sets, and to locate log records to satisfy log read requests during normal DB2 system activity and during restart and recovery processing.
- A wrap-around inventory of all recent DB2 checkpoint activity. DB2 uses this information during restart processing.
- The distributed data facility (DDF) communication record, which contains information that is necessary to use DB2 as a distributed server or requester.
- Information about buffer pools.
Because the BSDS is essential to recovery in the event of subsystem failure, during installation DB2 automatically creates two copies of the BSDS and, if space permits, places them on separate volumes. The BSDS can be duplexed to ensure availability.
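To inspect what the BSDS currently contains, the print log map utility (DSNJU004) can be run as a stand-alone batch job; a minimal sketch, where the load library and BSDS dataset names are hypothetical and site-specific:
//* Print the contents of the bootstrap data set
//PRTMAP   EXEC PGM=DSNJU004
//STEPLIB  DD DISP=SHR,DSN=DSNC10.SDSNLOAD
//SYSUT1   DD DISP=SHR,DSN=DSNC10.BSDS01
//SYSPRINT DD SYSOUT=*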
1. Working Storage copybook: It contains the input or output field names with the field definitions, such as the type and length of each field. Instead of defining so many variables in the WORKING-STORAGE SECTION, we can keep them in a separate member. It is also useful in mapping the input or output records to the specific fields.
Syntax: COPY <copybookname> REPLACING ==<fromvariable>== BY ==<tovariable>==
Example: COPY COPYBOOK REPLACING ==VAR1== BY ==VAR3==
2. Procedure Division copybook: It contains a set of COBOL statements to be executed at a particular point. It acts like a sub-program. Copybooks are simply COBOL code that is used in multiple programs and therefore written in a separate member.
By using the COPY <copybook name> statement (COPY is a compiler-directing statement), the code in the copybook is expanded during compilation. Copybooks are mostly used to define file structures and other variable structures that are needed in multiple programs, and also for common paragraphs (e.g. abend paragraphs), but that is not their only usage. Copybooks can also be written for the FILE SECTION or for SELECT clauses. Any part of a COBOL program can be placed in a separate copybook and expanded with the COPY statement; there is no formal classification into working-storage copybooks and procedure-division copybooks.
Major reasons for using copybooks in assembly language or COBOL are to:
- ensure that everyone uses the same version of a data layout definition or procedural code
- make it easier to cross-reference where components are used in a system
- make it easier to change programs when needed (only one master copy to change)
- save programmer time by not needing to code extensive data layouts (minor, but useful)
A copybook used with the Data Parser must follow these rules:
- It must be a text file, and each line must end with a period.
- Field names cannot have parentheses.
- A level number from 1 to 9 must start with a zero (0) so that it is a two-digit number, for example: 05 PDM-ID-UNITGROUP.
- Each field must have a PIC clause to specify its format, such as 05 PDMNUM-PRD PIC 99.
- Repeating record types must have OCCURS clauses to specify the number of times records repeat, for instance: 05 PDM-CUR-REGULAR-RATE-DATA OCCURS 6 TIMES. The OCCURS clause specifies the number of repeated occurrences of data items with the same format in a table.

Note: If your data is read one byte short: if your copybook contains a COMP-3 data field, the size of the field varies depending on the schema type you use. Example problem: many users choose Cobol 01 or Acucobol-85 as their schema type. This is the best choice in many cases. However, Cobol 01 and Acucobol-85 do not consider a sign as part of the size of the field, so the field may be read one byte short and your data might not line up properly in the Data Parser. Solution: instead of Acucobol-85 or Cobol 01, use the RM-Cobol, MicroFocus Cobol, or Fujitsu Cobol schema types, since they do recognize the sign as part of the data field.

REDEFINES in COBOL copybooks: The REDEFINES clause allows you to have multiple field definitions for the same piece of storage, so the same data can be referenced in multiple ways. Take a simple example, useful for data validation:
01 WS-NUMBER-X PIC X(8).
01 WS-NUMBER REDEFINES WS-NUMBER-X PIC 9(6)V99.
This portion of the DATA DIVISION consumes only 8 bytes of storage, not 16; each of the two PIC clauses describes the same 8 bytes of data, just differently. Two common uses of REDEFINES in COBOL copybooks exist:
Type 1: The "simple" in-place REDEFINES is typically used to define a certain block of bytes (for example, a date or Social Security Number) both as one complete field and also as a series of subfields. In such cases, the Map Designer default behavior of picking the most "complex" REDEFINE (the one with the most subfields) is very useful, because you get maximum mapping control right down to the smallest fields.
Type 2: The "multiple record type" REDEFINES is where the 01 structured schema has a fixed header at the front and then proceeds to define all record types within the one 01 layout. These record types are often defined at a high level, such as 03 or 05. The Map Designer default behavior does not suffice if you want all of the different record types to be recognized simultaneously. The best option is to move the 03 or 05 record types up to an 01 level and drop the REDEFINES; just remember to keep the "fixed" header portion for each one. Note that there could still be Type 1 simple REDEFINES embedded within any or all of the multiple record types (however, they should be handled automatically).
COBOL Copybook Tutorial: The following describes how to create a copybook: http://www.csis.ul.ie/cobol/Course/Tables2.htm
See also: Structured Schema Designer Window on page 2-2 of the Data Parser User's Guide; Using an External Dictionary File Schema on page 3-18 of the Data Parser User's Guide.
- Provides catalog and change management tools and high-speed utilities to support objects throughout the application life cycle
- Enables version control for added flexibility
- Recovers inadvertently dropped DB2 objects and enables you to back out changes
- Provides dynamic allocation, automatic restart, and automated error correction
- Includes an integrated table editor

- Improve productivity by simplifying DB2 catalog navigation and change management
- Mitigate risk by synchronizing changes with related objects
- Enhance application availability by performing maintenance with minimal outages
- Reduce the costs of unloading, loading, and copying DB2 data
- Simplifies tasks associated with creating, altering, and dropping objects, maintaining user privileges, generating DB2 and BMC utility JCL, and executing DB2 commands
- Provides easy DB2 catalog navigation and extensive reporting
- Imports, edits, generates, tests, explains, and analyzes SQL
- Includes an integrated table editor
- Creates an audit of DB2 catalog activity and enables drop-recovery logging to quickly restore from the log when needed

- Improve DBA productivity by eliminating the need for extensive knowledge of the DB2 catalog table structure or the SQL that is required to query it
- Ensure integrity with the ability to easily recover inadvertently dropped DB2 objects
- Automates all aspects of the schema management process, including changing structures, moving data, and tracking changes in all DB2 environments, all while properly managing ERP and CRM applications
- Ensures that a planned change is implemented correctly, the first time, without costly mistakes
- Enables you to roll changes forward or back, as needed, and to back out changes
- Provides profiles to define and control repetitive change, migration, or baseline processes
- Makes changes to multiple databases and keeps them in sync, if desired

- Ensure data integrity throughout the change management process
- Improve application availability by reducing the time it takes to perform structural changes and by executing changes in parallel
- Eliminate manual change management processes
- Unloads DB2 tables more quickly, using fewer resources than the native DB2 utility
- Converts DB2 data to other formats and writes it to any destination
- Captures data from a specific point in time, either unloading from an image copy prior to the most recent one or unloading from a consistent point in time while the table space is online and available to applications
- Eliminates processing of partitions that do not meet the selection criteria or, if desired, unloads at the partition and table-space level
- Manages its own buffering, performs I/O operations at the lowest level possible, processes objects in one pass, and runs outside of DB2
- Eliminates the need to execute RUNSTATS, sort data, and reorganize indexes separately

- Reduce the cost of unloading DB2 data
- Reduce JCL maintenance by dynamically allocating unload data sets and new data sets when object definitions change (such as when partitions or a new table are added)
- Improve data availability
- Loads DB2 tables more quickly, using fewer resources than the native DB2 utility
- Analyzes and allocates memory and CPU resources for maximum performance and availability
- Eliminates the need to execute the RUNSTATS utility separately
- Automatically calculates optimal work file sizes and dynamically allocates work file and copy data set space
- Sorts input data, dynamically allocates work files and image copy data sets, rebuilds and reorganizes indexes, produces copies, and collects DB2 object statistics during the load process

- Reduce the cost of loading DB2 data
- Ensure data integrity by verifying that data is correct before replacing or adding it to existing data, and by preventing the loading of incorrect data caused by JCL errors
- Improve data availability
- Backs up data more quickly and uses fewer resources than the native DB2 copy utility
- Copies all related data sets automatically and quiesces groups of table spaces related to application recovery, if needed
- Collects RUNSTATS during the copy, so you don't need to run a separate job
- Creates encrypted image copies
- Writes compressed copies to save on storage for image copy tapes
- Writes copies to DASD and tape simultaneously

- Improve data availability
- Offload copy work to zIIP engines, where available
- Ensure that all copies are in sync with DB2
- Provides near-continuous availability for utility processing with BMC utilities
- Exploits intelligent storage devices from IBM, EMC, and Hitachi
- Provides very fast recovery capability by creating point-in-time copies
- Takes snapshot copies while allowing updates in parallel, which is faster and more efficient than serial processing
- Creates snapshot copies that can be used for recovery, unload, and reorg tasks

- Improve data availability
- Maintain peak system availability across the sysplex by providing extensive monitoring of Coupling Facility usage and performance
- Reduce outages and costs by enabling non-disruptive maintenance in a data-sharing environment
Bufferpool 1: 256,000 pages, page size 4K
Bufferpool 2: 64,000 pages, page size 32K
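For rough sizing, the virtual storage these figures imply works out as: bufferpool 1 holds 256,000 x 4 KB = 1,024,000 KB, about 1 GB; bufferpool 2 holds 64,000 x 32 KB = 2,048,000 KB, about 2 GB.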
Entry criteria: SRS/BRS, development plan, TRM
Exit criteria: reviewed test plan

Entrance criteria:
1) All source code is unit tested
2) All QA resources have enough functional knowledge
3) Hardware and software are in place
4) Test plans and test cases are reviewed and signed off

Exit criteria:
1) No defects over a period of time or testing effort
2) Planned deliverables are ready
3) High-severity defects are fixed
Testing Stage: Specify the testing stage for this Test Summary Report.
Test Summary Number: Specify a unique Test Summary identifier.
Test Case Number: Specify the unique test case number assigned to the test log.
Review Date: mm/dd/yy

TEST SUMMARY
Test Summary: Identify the items tested and any relevant version/revision level, and summarize the evaluation of the test items. Reference any pertinent system test documents such as the test case, test log, and incidents.
Variances: Report observed variances of the test items from the test cases. Specify known reasons for each variance found.
Assessment: Report observations on the breadth of the testing process based upon the system test documents, test plan, and test cases.
Summary of Results: Summarize the overall results of the testing process. Identify all anomalies or incidents found during testing, and discuss incident resolutions and any unresolved incidents.
Evaluation: Provide an overall evaluation based upon the test results and the number of incidents resolved or unresolved.
Corrective Action Plan (CAP): Summarize the testing activities and the next testing steps. Identify any unresolved requirements or incidents not completed with this phase of testing and include them in the CAP.

APPROVALS
<Signature>: Identify the person and title that must authorize and sign off for this testing stage. Once all signatures are obtained, the next testing stage/implementation may begin.
<Signature>: Identify the person and title that must authorize and sign off for this testing stage. Once all signatures are obtained, the next testing stage/implementation may begin.
<Signature>: Identify the person and title that must authorize and sign off for this testing stage. Once all signatures are obtained, the next testing stage/implementation may begin.
Implemented In: This column should be populated with the module in which the functional requirement has been implemented.
Verification: This column should be populated with a description of the verification document linked to the functional requirement.
Additional Comments: This column should be populated with any additional comments.
The Matrix should be created at the very beginning of a project because it forms the basis of the project's scope and incorporates the specific requirements and deliverables that will be produced. The Matrix is considered to be bi-directional: it tracks each requirement forward by examining the output of the deliverables, and backward by looking at the business requirement that was specified for a particular feature of the product. The RTM is also used to verify that all requirements are met and to identify changes to the scope when they occur.
Requirements <-> RFP <-> Design/Task <-> Deliverables <-> Verification
The use of the RTM enhances the scope management process. It also assists with process control and quality management. The RTM can also be thought of as a process of documenting the connections and relationships between the initial requirements of the project and the final product or service produced.