Hierarchy
⤷ CA-FS-ARE (Application Component) Archiving Engine
⤷ ARFA_ARCHIVING_FACTORY (Package) Archiving Factory
Basic Data
Data Element | TYP_ACTION00
Short Description | Introduction
Data Type
Category of Dictionary Type | D | Domain
Type of Object Referenced | No Information
Domain / Name of Reference Type | XFELD
Data Type | CHAR | Character String
Length | 1
Decimal Places | 0
Output Length | 1
Value Table |
Further Characteristics
Search Help: Name |
Search Help: Parameters |
Parameter ID |
Default Component name |
Change document |
No Input History |
Basic direction is set to LTR |
No BIDI Filtering |
Field Label
| Length | Field Label
Short | 10 | Introd. |
Medium | 15 | Introduction |
Long | 20 | Introduction |
Heading | 10 | Introd. |
Documentation
The Archiving Factory allows you to define archiving scenarios and deletion scenarios. To guarantee optimum system performance, business objects need to be removed from the operational system when they have reached the end of their business life cycle. The non-operational business objects are then stored in a medium away from the operational database. These stored business objects can be selected and interpreted using business views from the operational system.
The Archiving Factory is a hierarchical development tool for configuring, creating, and implementing business archiving scenarios and deletion scenarios.
Scenarios defined using the Archiving Factory (design time) can be run in the Archiving Engine (runtime). The Archiving Factory saves and manages all metadata relevant for a scenario in a repository, for example:
- the archiving objects and deletion objects involved
- the business check (context check on business objects)
- data retrieval for individual business objects
- selection procedures for the business objects (analysis)
- business rejection reasons
- business views
- residence times
Technical aspects are also defined and managed with the Archiving Factory, for example, parallel processing procedures, analysis strategies, logging, and the time frame for reloading data in archiving scenarios (reserved for absolutely exceptional cases).
The Archiving Engine evaluates this metadata at runtime and thereby enables non-interactive volume reduction in the operational database.
The Archiving Monitor is a reporting component that reflects the progress of the Archiving Engine. It can be used as a basis for resource planning when mass data volumes have to be processed at different times.
During the introduction phase, in operational use, and in maintenance and testing, the Archiving Engine Diagnostics can be used to simulate and analyze the business check, residence time determination, and data retrieval from the database for specifically addressed business objects in a mass data environment. This allows potential error situations to be identified and reproduced effectively.
Basic Information for Use
The toolbar contains the following functions:
- Create or change the hierarchy
- Consistency check for individual or all archiving and deletion scenarios (inconsistencies in the hierarchy definition are listed in a log for postprocessing)
- Save (including ADK comparison)
- Navigation to ADK (Archive Development Kit)
- Physical and logical path definition for storing the generated archive files
- AIS definition (Archive Information System): definition of the fields and the index table used to find archived business objects (the logical connection between the operational system and the archive; administration of pointers to business objects)
- Recording of individual scenarios or the complete hierarchy in a transport request
- Navigation to transport system
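The AIS index table mentioned in the list above can be pictured as a simple lookup structure that administers pointers from business objects to their archive location. The following is a hypothetical Python sketch; the keys, file paths, and function name are illustrative assumptions, not the SAP implementation:

```python
# Hypothetical sketch of an archive index: it maps business-object keys
# from the operational system to the archive file (and position) where
# the archived object can be found and read back.
archive_index = {
    # business object key -> (archive file, offset of the data object)
    "ORDER-0000004711": ("/archive/orders_000001.adk", 0),
    "ORDER-0000004712": ("/archive/orders_000001.adk", 2048),
}

def locate(business_key):
    """Return the archive location for an archived business object, or None."""
    return archive_index.get(business_key)

print(locate("ORDER-0000004711"))  # found in the index
print(locate("ORDER-9999999999"))  # not archived -> None
```

In the real system, this index is filled during archiving and queried by business views to select and interpret archived objects from the operational system.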
Each node in the hierarchy represents a process step. When you click the detail button, the detail view for the process node appears to the right, together with the relevant documentation.
Subnodes and whole subtrees (with default values) can be created below the individual nodes. You can change, overwrite, or create new default values according to your requirements.
Double-click in the detail view to go to the development object in the relevant editor. You can change, define entries for, and implement these development objects (tables, structures, function modules, reports, data elements) according to your requirements using the appropriate editor.
In change mode, if the name of a development object that does not yet exist is entered in a field, the system creates the object when you double-click its name and then navigates automatically to the relevant editor for entry definition or implementation. For function modules, the interface is created automatically.
On scenario level, you define the basic parameters for the archiving or deletion process flow. The relevant tab pages are provided on the detail screen.
Each scenario is divided into a read section (data retrieval) and a check section (business check).
Each section is divided again into plug-ins. In the standard system, each of the two sections contains one plug-in only.
In the read section, a plug-in is a logical combination of tables with their read modules.
In the check section, a plug-in is a logical combination of business checks (function modules).
Note 1
A plug-in can be used by multiple scenarios simultaneously (plug-in sharing). This enables the reuse of business checks and data retrieval logic. Plug-ins are the logical equivalent of archiving and deletion classes.
Plug-in sharing is not usually necessary and should only be used with care, in exceptional cases.
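The scenario structure described above (a read section and a check section, each divided into plug-ins, with optional plug-in sharing) can be sketched as a small data model. This is an illustrative sketch only; all class names, table names, and module names below are assumptions, not SAP APIs:

```python
from dataclasses import dataclass, field

@dataclass
class ReadPlugin:
    # A read-section plug-in: a logical combination of tables
    # with their read modules.
    name: str
    tables_to_read_modules: dict  # table name -> read module name

@dataclass
class CheckPlugin:
    # A check-section plug-in: a logical combination of
    # business checks (function modules).
    name: str
    check_modules: list

@dataclass
class Scenario:
    # Each scenario is divided into a read section (data retrieval)
    # and a check section (business check). In the standard system,
    # each section contains exactly one plug-in.
    name: str
    read_section: list = field(default_factory=list)
    check_section: list = field(default_factory=list)

# Standard case: one plug-in per section (names are illustrative).
orders_read = ReadPlugin("ORDERS_READ", {"VBAK": "READ_VBAK", "VBAP": "READ_VBAP"})
orders_check = CheckPlugin("ORDERS_CHECK", ["CHECK_ORDER_LIFECYCLE_ENDED"])
archiving = Scenario("ARCHIVE_ORDERS", [orders_read], [orders_check])

# Plug-in sharing (Note 1): a deletion scenario reuses the same plug-ins,
# so the business check and data retrieval are defined only once.
deletion = Scenario("DELETE_ORDERS", [orders_read], [orders_check])
assert deletion.check_section[0] is archiving.check_section[0]
```

The shared-object identity in the last line is the point of Note 1: both scenarios reference the same plug-in instance rather than a copy, which is also why changes to a shared plug-in affect every scenario that uses it.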
Note 2
There are nodes (administration nodes) in the hierarchy which do not require entries in the detail screen. This is stated in the related detail screen. An example of an administration node is the node 'Read Section'.
History
Last changed by/on | SAP | 20110908
SAP Release Created in | 40