Hierarchy
⤷ FI-GL-REO (Application Component) General Ledger Reorganization
⤷ FAGL_REORGANIZATION_FW (Package) Reorganization - Framework
Basic Data
Data Element | FAGL_R_JOB_NR_OF_BATCH |
Short Description | Number of Batch Jobs for Parallel Processing |
Data Type
Category of Dictionary Type | D | Domain |
Type of Object Referenced | No Information | |
Domain / Name of Reference Type | NUMC2 | |
Data Type | NUMC | Character string with only digits |
Length | 2 | |
Decimal Places | 0 | |
Output Length | 2 | |
Value Table |
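Concretely, the underlying domain NUMC2 stores a two-character string containing only digits, so valid values run from "00" to "99". A minimal Python sketch of that constraint (illustrative only; `is_valid_numc2` is not an SAP function, and the real check is performed by the ABAP Dictionary):

```python
def is_valid_numc2(value: str) -> bool:
    """NUMC with length 2: a character string of exactly two digits."""
    return len(value) == 2 and value.isdigit()

print(is_valid_numc2("03"), is_valid_numc2("3"), is_valid_numc2("3a"))
# prints: True False False
```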
Further Characteristics
Search Help: Name |
Search Help: Parameters |
Parameter ID |
Default Component name |
Change document |
No Input History |
Basic direction is set to LTR
No BIDI Filtering
Field Label
Length | Field Label | |
Short | 10 | No. Jobs |
Medium | 15 | Number of Jobs |
Long | 20 | Number of Jobs |
Heading | 11 | No. of Jobs |
Documentation
Definition
Number of jobs over which parallel processing is distributed.
Use
You can run the following mass activities in parallel:
- Generate object list
- Reassignment
- Transfers
When one of these activities is started, the system runs a dispatching program that divides the account assignment objects to be processed into subpackages of a defined size.
The Number of Jobs parameter controls how many jobs are created to process the generated subpackages in parallel.
By default, parallel processing uses three jobs; however, you can specify a different number.
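The division of work described above can be sketched in Python (illustrative only; the names `dispatch`, `PACKAGE_SIZE`, and `NUMBER_OF_JOBS` are assumptions for this sketch, and the real framework starts ABAP batch jobs rather than threads):

```python
from concurrent.futures import ThreadPoolExecutor

PACKAGE_SIZE = 4    # subpackage size (illustrative value)
NUMBER_OF_JOBS = 3  # mirrors the default of three parallel jobs

def make_subpackages(objects, size):
    """Dispatcher step 1: divide the objects into subpackages of a defined size."""
    return [objects[i:i + size] for i in range(0, len(objects), size)]

def process(subpackage):
    """Placeholder for the real per-subpackage processing."""
    return len(subpackage)

def dispatch(objects):
    """Dispatcher step 2: hand the subpackages to the parallel jobs."""
    subpackages = make_subpackages(objects, PACKAGE_SIZE)
    if NUMBER_OF_JOBS == 0:
        # no additional jobs: the dispatcher processes everything itself
        return [process(p) for p in subpackages]
    with ThreadPoolExecutor(max_workers=NUMBER_OF_JOBS) as pool:
        return list(pool.map(process, subpackages))

print(dispatch(list(range(10))))  # prints: [4, 4, 2]
```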
Dependencies
Note that a sufficient number of batch processes must be available, taking into account any batch processes that are already in use (for example, by the dispatching program or by periodically scheduled jobs).
The dispatching program does not complete until the specified number of jobs has actually been started.
Performance during processing is influenced by the number of jobs processed in parallel and by the size of the subpackages created. Note the following:
- You can influence the overall performance of a parallel process by choosing a subpackage size that distributes the data to be processed more or less evenly across the jobs. For example, you can prevent very frequently assigned materials from being concentrated in the same subpackages. As a general rule: the smaller the subpackages, the more evenly the processing effort is distributed across the running jobs.
With a small number of large subpackages, some jobs may finish while one or more other jobs continue processing for a considerable time, for example because their subpackages contain frequently used objects.
- However, very small subpackages combined with a very large number of parallel jobs can also degrade performance, because managing the parallel processing itself (distributing work packages, database accesses, and so on) then creates overhead of its own.
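The load-balancing effect of smaller subpackages can be illustrated with a simple scheduling model (a sketch under assumed numbers, not SAP's scheduler; `makespan` greedily assigns each subpackage to the least-loaded job and returns the finishing time of the slowest job):

```python
def makespan(package_costs, jobs):
    """Assign each subpackage (largest first) to the currently
    least-loaded job; return the finishing time of the slowest job."""
    loads = [0] * jobs
    for cost in sorted(package_costs, reverse=True):
        loads[loads.index(min(loads))] += cost
    return max(loads)

# 120 units of work on 3 jobs:
print(makespan([60, 30, 30], 3))  # few large subpackages -> prints: 60
print(makespan([10] * 12, 3))     # many small subpackages -> prints: 40
```

With the same total work, the three large subpackages leave one job busy for 60 units while the others idle at 30, whereas twelve small subpackages balance out at 40 units per job.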
Example
If you enter 10 as the number of jobs, 10 jobs are started in addition to the dispatching program, and these jobs process the dataset in parallel.
If you enter 1 as the number of jobs, 1 job is started in addition to the dispatching program, and this job processes the entire dataset on its own. In this case, there is no parallel processing.
If you enter 0 as the number of jobs, no additional job is started; processing is performed entirely by the dispatching program. In this case, there is no parallel processing.
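The arithmetic of these three cases can be summarized in a small sketch (illustrative helper names, not SAP functions):

```python
def batch_processes_occupied(number_of_jobs: int) -> int:
    """The dispatching program itself, plus the additional jobs it starts
    (with 0 jobs, the dispatcher processes the dataset alone)."""
    return 1 + number_of_jobs

def is_parallel(number_of_jobs: int) -> bool:
    """Parallel processing requires at least two jobs working on the data."""
    return number_of_jobs >= 2

for n in (10, 1, 0):
    print(n, batch_processes_occupied(n), is_parallel(n))
# prints:
# 10 11 True
# 1 2 False
# 0 1 False
```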
History
Last changed by/on | SAP | 20100310 |
SAP Release Created in | 605 |