29 Nov 2024

Oracle Data Guard Interview Questions - Part 2

 

Click Here for Data Guard Interview Questions - Part 1

11. How can you monitor the status of Oracle Data Guard?

  • Answer:
    • Using the Data Guard Broker (dgmgrl command-line tool or Enterprise Manager).
    • Checking the alert logs of both the primary and standby databases.
    • Monitoring the log transport and apply processes (e.g., v$archive_dest_status, v$dataguard_status).
    • Running queries like SELECT * FROM v$dataguard_stats; to gather statistics.
    • Ensuring the Data Guard broker is running and using its commands to check status.
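As a hedged illustration, status can be spot-checked with queries over the standard Data Guard views (run as a privileged user):

    -- Transport/apply state of each destination (on the primary)
    SELECT dest_id, status, database_mode, recovery_mode
    FROM   v$archive_dest_status
    WHERE  status <> 'INACTIVE';

    -- Recent Data Guard messages (on either database)
    SELECT timestamp, facility, severity, message
    FROM   v$dataguard_status
    ORDER BY timestamp;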

12. What are the prerequisites for setting up Oracle Data Guard?

  • Answer:
    • A primary Oracle database and one or more standby databases.
    • Same version and patch level for both the primary and standby databases.
    • Proper network configuration between primary and standby.
    • Archivelog mode and FORCE LOGGING enabled on the primary database.
    • Flashback Database enabled (recommended, so a failed primary can be reinstated after a failover).
    • A proper backup strategy.

13. How can you convert a physical standby to a logical standby?

  • Answer: You can convert a physical standby to a logical standby by:
    • Running the DBMS_LOGSTDBY.BUILD procedure on the primary to capture the LogMiner dictionary in the redo stream.
    • Ensuring that all outstanding redo has been applied to the standby.
    • Converting the standby with the ALTER DATABASE RECOVER TO LOGICAL STANDBY command, then opening it and starting SQL Apply (see the sketch below).
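A hedged sketch of the sequence, assuming a typical physical standby (the new database name is a placeholder):

    -- On the primary: build the LogMiner dictionary into the redo stream
    EXECUTE DBMS_LOGSTDBY.BUILD;

    -- On the physical standby: stop redo apply, then convert
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    ALTER DATABASE RECOVER TO LOGICAL STANDBY new_stby;

    -- Open the new logical standby and start SQL Apply
    ALTER DATABASE OPEN RESETLOGS;
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;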

14. What is the role of Flashback in Oracle Data Guard?

  • Answer: Flashback technology enables you to quickly recover from human errors, allowing you to "flash back" to a previous point in time. In the context of Data Guard, Flashback can be used to recover the standby database to a point before a failure or corruption, providing a fast recovery solution.

15. Can a Data Guard setup work with non-Oracle databases?

  • Answer: No, Oracle Data Guard is specifically designed for Oracle databases. It is not compatible with non-Oracle databases. However, Oracle GoldenGate can be used for replication and data integration between Oracle and non-Oracle databases.

16. How do you perform a role transition in Data Guard?

  • Answer: You perform role transitions using switchover or failover operations:
    • Switchover: A planned transition between the primary and standby roles, typically done for maintenance.
    • Failover: An unplanned role transition triggered by the failure of the primary database.
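A minimal SQL sketch of both transitions, assuming a 12c-or-later physical standby whose db_unique_name is the placeholder stby_db:

    -- Planned switchover, run on the primary (VERIFY checks readiness first)
    ALTER DATABASE SWITCHOVER TO stby_db VERIFY;
    ALTER DATABASE SWITCHOVER TO stby_db;

    -- Unplanned failover, run on the standby when the primary is lost
    ALTER DATABASE FAILOVER TO stby_db;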

17. What is Data Guard’s "Apply Lag" and how do you monitor it?

  • Answer: Apply Lag is the delay in applying redo logs on the standby database compared to the primary. You can monitor it using v$dataguard_stats or querying v$archive_dest_status to track the log shipping and applying status.
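For example, a hedged lag check against the standard v$dataguard_stats columns might look like:

    SELECT name, value, unit, time_computed
    FROM   v$dataguard_stats
    WHERE  name IN ('transport lag', 'apply lag');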

18. What happens if the Data Guard configuration gets out of sync?

  • Answer: When a Data Guard configuration gets out of sync, the standby database may fall behind in applying the redo logs. It is important to monitor the system and resolve this issue by applying missing logs or rebuilding the standby.

19. Can Oracle Data Guard be used with RAC (Real Application Clusters)?

  • Answer: Yes, Oracle Data Guard can be configured with Oracle RAC. Each instance of the RAC cluster can be part of the Data Guard configuration, and the redo logs are transported from each RAC node in the primary database to the standby databases.

20. What is Data Guard "Log Shipping" and how is it configured?

  • Answer: Log shipping refers to the process of transferring redo from the primary database to the standby. It is configured by setting the LOG_ARCHIVE_DEST_n archive destinations in the parameter file (init.ora/spfile) and by ensuring proper network connectivity (TNS aliases) between the primary and standby systems.
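A hedged example of a typical destination setting (the service name and db_unique_name values are placeholders):

    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=stby_tns ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby_db'
      SCOPE=BOTH;
    ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE SCOPE=BOTH;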


Oracle Data Guard Interview Questions - Part 1

 Here are some common Oracle Data Guard interview questions that could be asked during a job interview:

1. What is Oracle Data Guard?

  • Answer: Oracle Data Guard is a disaster recovery and data protection solution for Oracle databases. It maintains one or more standby databases as copies of the production database. These standby databases can be used for failover, data protection, and offloading read-only query operations.

2. Explain the types of Standby Databases in Oracle Data Guard.

  • Answer:
    • Physical Standby Database: A replica of the primary database, maintaining an exact binary copy. It can be opened in read-only mode for reporting purposes.
    • Logical Standby Database: A database that uses SQL Apply to replay changes made on the primary. It is open read-write (objects maintained by SQL Apply stay protected), so it's more flexible than a physical standby.
    • Snapshot Standby Database: A physical standby database that can be opened for read-write activities temporarily while still maintaining its ability to apply logs from the primary database once it reverts to a physical standby.

3. What is the difference between synchronous and asynchronous Data Guard?

  • Answer:
    • Synchronous Data Guard (Maximum Availability Mode): Redo is shipped to the standby synchronously, and the primary waits for the standby's acknowledgment before completing a commit. This ensures no data loss, but there may be some performance overhead due to network latency.
    • Asynchronous Data Guard (Maximum Performance Mode): Redo logs are transmitted to the standby database asynchronously. The primary database does not wait for acknowledgment from the standby, which reduces performance impact but may allow data loss in case of a failure.

4. What are the different Data Guard protection modes?

  • Answer:
    • Maximum Protection: Ensures no data loss by requiring that all redo log writes are completed on both the primary and standby before the commit is acknowledged.
    • Maximum Availability: Provides a balance between performance and data protection. Redo is shipped synchronously as in Maximum Protection, but if the standby becomes unreachable the primary continues processing instead of halting, temporarily accepting the possibility of data loss.
    • Maximum Performance: Focuses on minimizing the performance impact on the primary database by using asynchronous transmission.

5. What is a Data Guard Broker?

  • Answer: Oracle Data Guard Broker is a management and automation tool for Data Guard configurations. It provides an easy-to-use interface for configuring, monitoring, and managing Data Guard. It helps automate failover and switchover operations, and simplifies the management of Data Guard environments.

6. What is a switchover operation in Data Guard?

  • Answer: A switchover is a planned role reversal between the primary and standby databases, where the primary becomes the standby and vice versa. This operation allows maintenance to be performed on the primary database without data loss. It is often done for system upgrades or maintenance.

7. What is failover in Oracle Data Guard?

  • Answer: Failover is the process of promoting the standby database to the primary role after a failure of the primary database. It can be performed manually, or automatically if Fast-Start Failover is configured with the Data Guard broker, and it may result in some data loss if the configuration runs in asynchronous mode.

8. What is Data Guard log transport and log apply services?

  • Answer:
    • Log Transport Services (LTS): Responsible for transporting redo data from the primary database to the standby database. It handles log file transfer.
    • Log Apply Services (LAS): Responsible for applying the redo logs received by the standby database. This ensures that the standby database is kept in sync with the primary.

9. How does Oracle Data Guard handle redo log transportation?

  • Answer: Redo from the primary database is transmitted to the standby database using either synchronous or asynchronous transport. The transfer is done over the network, and the redo is written to standby redo logs and archived at the standby destination. Data Guard can use Real-Time Apply to apply redo as soon as it is received, rather than waiting for a log switch.

10. What are some common troubleshooting steps in a Data Guard environment?

  • Answer: Common troubleshooting steps include:
    • Checking the Data Guard configuration using dgmgrl.
    • Verifying network connectivity between primary and standby servers.
    • Reviewing the Data Guard logs and alert logs for errors.
    • Verifying that redo logs are being shipped from primary to standby.
    • Ensuring the correct application of redo logs on the standby database.
    • Checking the configuration of archive log destinations.
    • Verifying that managed recovery (MRP) is running on the standby and that the database is in the expected state (mounted, or open read-only with Active Data Guard).
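Two hedged checks that often come up here, using standard views on the standby:

    -- Any missing log sequences (gap detection)
    SELECT * FROM v$archive_gap;

    -- Is managed recovery (MRP) running, and how far has apply progressed?
    SELECT process, status, thread#, sequence#
    FROM   v$managed_standby
    WHERE  process LIKE 'MRP%';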

Click Here for more Interview Questions related to Oracle Data Guard

Oracle Cloud DBCS Interview Questions for beginners - Part 2

 

11. What is Oracle Cloud Autonomous Transaction Processing (ATP)?

  • Answer: Oracle Autonomous Transaction Processing (ATP) is an Oracle Autonomous Database optimized for transaction-based workloads. It supports SQL and PL/SQL applications, and it automates tasks such as scaling, patching, and backup. ATP is ideal for OLTP (Online Transaction Processing) applications.

12. How do you monitor a database in Oracle DBCS?

  • Answer:
    • Use Oracle Cloud Console to monitor the health, performance, and resource utilization of the database.
    • Oracle Enterprise Manager (OEM): A web-based tool to monitor and manage Oracle databases.
    • Cloud Monitoring Services: Provides real-time metrics such as CPU usage, memory, storage, and disk I/O for Oracle DBCS instances.

13. What is the role of Oracle Cloud Storage in DBCS?

  • Answer: Oracle Cloud Storage is used to store database backups, data files, and logs for Oracle DBCS instances. It is highly secure and scalable, allowing you to store and manage large amounts of data with redundancy and disaster recovery options.

14. What is the difference between a Virtual Machine and a Bare Metal instance in Oracle Cloud?

  • Answer:
    • Virtual Machine (VM): A virtualized computing environment that shares physical resources with other VMs. VMs are more flexible and easier to scale.
    • Bare Metal Instance: A physical server dedicated to a single tenant, providing complete control over the hardware and resources. It offers better performance but lacks some of the flexibility of virtual machines.

15. What are some common use cases for Oracle DBCS?

  • Answer: Common use cases include:
    • OLTP (Online Transaction Processing): For applications requiring fast transactional processing.
    • Data Warehousing: Running large-scale data analysis and reporting workloads.
    • Disaster Recovery: Using Oracle DBCS with features like Data Guard for business continuity.
    • Development and Testing: Providing isolated database environments for development teams.

16. What is a Cloud Firewall in Oracle Cloud?

  • Answer: Oracle Cloud Firewall provides security by controlling inbound and outbound traffic to your Oracle Cloud resources (like DBCS instances). It allows administrators to set up rules based on IP addresses, ports, and protocols to control access to cloud resources.

17. Explain the concept of "Scaling" in Oracle Cloud DBCS.

  • Answer: Scaling in Oracle Cloud DBCS refers to adjusting the resources allocated to a database instance, such as CPU, memory, and storage. You can scale up (add more resources) or scale down (reduce resources) to meet changing performance or capacity requirements. Oracle provides both vertical scaling (scaling resources within the same instance) and horizontal scaling (adding more instances for workload distribution).

18. What are the key components of Oracle Cloud Infrastructure (OCI) used in DBCS?

  • Answer: Key components include:
    • Compute (VM or Bare Metal Instances): Provides the underlying compute resources for running the database.
    • Storage: High-performance block storage and object storage to store data and backups.
    • Networking: Virtual Cloud Network (VCN), subnets, and load balancers for connecting resources.
    • Identity and Access Management (IAM): Controls user access to Oracle Cloud resources.


Oracle Cloud DBCS Interview Questions for beginners - Part 1

 Here are some Oracle Cloud Database Cloud Service (DBCS) interview questions tailored for beginners, covering essential concepts and basic understanding of Oracle Cloud Infrastructure (OCI) and its database offerings:

1. What is Oracle Cloud Database Cloud Service (DBCS)?

  • Answer: Oracle Cloud Database Cloud Service (DBCS) is a fully managed database service that allows you to run Oracle databases on the cloud. It provides various options such as Oracle Autonomous Database, Oracle Database 12c, 18c, and 19c on Oracle Cloud Infrastructure (OCI), enabling businesses to manage, scale, and secure databases without worrying about hardware or infrastructure management.

2. What are the different types of databases available in Oracle Cloud?

  • Answer:
    • Autonomous Database (ADB): An autonomous, self-managing database service that automates routine tasks like patching, backups, and tuning.
    • Oracle Database Cloud Service (DBCS): A managed database service for running traditional Oracle databases.
    • Exadata Cloud Service: A high-performance, highly available database solution optimized for large workloads.

3. What is the difference between Oracle Autonomous Database and Oracle Database Cloud Service (DBCS)?

  • Answer:
    • Autonomous Database is a cloud-native service that automates administrative tasks such as patching, backups, scaling, and tuning. It uses AI and machine learning to optimize database performance.
    • DBCS is a more traditional managed database service that provides Oracle databases in the cloud but requires manual intervention for tasks like patching, backups, and scaling.

4. What is Oracle Cloud Infrastructure (OCI)?

  • Answer: Oracle Cloud Infrastructure (OCI) is a cloud computing platform provided by Oracle. It includes services for computing, storage, networking, and databases, enabling organizations to run workloads and applications in a secure and scalable cloud environment. OCI provides a foundation for running Oracle databases and other enterprise applications.

5. How do you create a database in Oracle Cloud Database Cloud Service (DBCS)?

  • Answer:
    • Log in to the Oracle Cloud Console.
    • Navigate to Databases > Oracle Database Cloud Service.
    • Click on Create Database and select the required configuration (e.g., Oracle version, storage options, etc.).
    • Configure the database settings like name, administrative passwords, and network configurations.
    • Launch the database instance and monitor the provisioning process.

6. What are the advantages of using Oracle DBCS over on-premise databases?

  • Answer:
    • Scalability: Easily scale up or down without worrying about hardware.
    • Cost Efficiency: Pay only for what you use with a subscription-based pricing model.
    • Managed Service: Oracle handles maintenance, patching, and backups.
    • High Availability: Oracle DBCS provides built-in high availability options, reducing downtime.
    • Disaster Recovery: Built-in disaster recovery options such as database backups to Oracle Cloud Storage.

7. What is Oracle Data Guard in Oracle DBCS?

  • Answer: Oracle Data Guard is a feature that provides high availability, data protection, and disaster recovery for Oracle databases. It involves creating and maintaining a standby database that mirrors the primary database. In case of a failure, the standby database can take over, minimizing downtime.

8. Explain the concept of backups in Oracle DBCS.

  • Answer:
    • Oracle DBCS supports automatic backups that are scheduled daily and stored in Oracle Cloud Storage. These backups are incremental and ensure data protection.
    • You can configure backup retention periods, choose between full or incremental backups, and restore databases from backups when necessary.
    • Backups can be done manually or configured as part of the automated backup process.

9. What is Oracle Cloud Infrastructure (OCI) Networking?

  • Answer: OCI Networking provides the underlying network infrastructure required for connecting cloud resources. It includes components such as Virtual Cloud Networks (VCNs), subnets, route tables, security lists, and load balancers to manage and control network traffic between cloud resources and on-premise systems.

10. How does Oracle DBCS ensure high availability?

  • Answer:
    • Oracle RAC (Real Application Clusters): For distributed database environments, RAC ensures high availability by allowing multiple database instances to run on different servers and access the same database storage.
    • Backup and Restore: Automated backups and the ability to restore databases to a previous state in case of failure.
    • Data Guard: Provides real-time data replication to a standby database, which can be activated if the primary database fails.


Click Here for Oracle Cloud DBCS Interview Questions for beginners - Part 2

Oracle Applications 12.2.x Dual File System Key Features

 In Oracle E-Business Suite (EBS) 12.2, DualFS (short for Dual File System) is the feature that underpins online patching. It is used in conjunction with the online patching methodology that Oracle introduced in 12.2 to enable patching with minimal disruption to users.

Key Concepts of DualFS in Oracle EBS 12.2:

  1. Separation of Run and Patch File Systems:

    • In Oracle EBS 12.2, the Dual File System divides the Oracle E-Business Suite filesystem into two main parts: a "run" file system and a "patch" file system.
    • The run file system contains the currently active version of your applications and is what users interact with.
    • The patch file system contains the patched version of the Oracle Applications files.
    • The two filesystems allow you to apply patches in a dual-phase approach—one for the run system and one for the patch system.
  2. Online Patching with DualFS:

    • Oracle 12.2 allows you to apply patches without needing to take the entire system down. While patches are being applied to the patch file system, users can continue using the run file system.
    • The patching process involves copying the patched files into the patch filesystem while the system is live. After patches are applied, the system switches over to the patch filesystem with minimal downtime (essentially, zero-downtime patching).
  3. Two Modes of Operation:

    • Run Mode: This is the mode where the system is actively used by end-users, and they interact with the applications.
    • Patch Mode: This is where patches are applied to the "patch" file system. Once patches are successfully applied and validated, the system is switched to use the new patched version.
  4. File System Switch (cutover):

    • Once patching is completed on the patch file system, the switch between the run and patch file systems is performed by the cutover phase of the adop utility; the former patch file system becomes the new run file system and the active environment for users. The fs_clone phase is then used to re-synchronize the file systems before the next patching cycle (see the adop sketch after this list).
    • This cutover takes very little time (a few minutes), and it does not require an extended application outage, which results in minimal disruption to end-users.
  5. Benefits of DualFS:

    • Minimized Downtime: The most significant advantage of DualFS is near-zero-downtime patching. The system needs only a brief cutover outage rather than an extended maintenance window during patch application.
    • Concurrent Patching: Allows patches to be applied concurrently with the normal running of the applications, without disrupting active users.
    • Flexibility in Maintenance: Patching and maintenance can be done during regular business hours, greatly reducing the need for extensive maintenance windows.
  6. Patch Lifecycle:

    • When Oracle releases patches, they can be applied incrementally and sequentially. After applying each patch, the system can be validated, and only once all patches are successfully applied is the final cutover made, promoting the patch file system to become the new run file system.
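As an illustration of the cycle described above, a typical adop run looks like the following sketch (the patch number is a placeholder):

    adop phase=prepare                    # start a patching cycle; sync the patch FS
    adop phase=apply patches=34567890     # apply patches to the patch file system
    adop phase=finalize                   # prepare the system for cutover
    adop phase=cutover                    # switch run and patch file systems (brief outage)
    adop phase=cleanup                    # drop obsolete objects left by patching

    adop phase=fs_clone                   # optional: re-clone the patch FS if it has diverged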


By utilizing DualFS, Oracle E-Business Suite 12.2 significantly reduces the impact of patching on business operations, enabling enterprises to maintain a more agile and up-to-date system while ensuring continuity of service.

Options for ASM disk Rebalancing

 

Additional Options for Rebalancing:

  1. Check the Rebalance Status: You can monitor the progress of the rebalance process using the following query:


    SELECT * FROM V$ASM_OPERATION;

    This will show the current rebalancing status, including the percentage of completion and any other relevant details.

  2. Stop a Rebalance: There is no separate cancel command; to stop a rebalance operation that is in progress, set its power to 0:


    ALTER DISKGROUP <diskgroup_name> REBALANCE POWER 0;
  3. Set Rebalance Power (Speed): You can control the speed of the rebalance operation by adjusting the rebalance power. A higher power value results in a faster rebalance but uses more I/O and CPU resources. The default comes from the ASM_POWER_LIMIT parameter (1, the slowest), and it can be raised to 11 (or up to 1024 from release 11.2.0.2 onward).

    Example:


    ALTER DISKGROUP <diskgroup_name> REBALANCE POWER 8;

    This would set the rebalance speed to level 8.

  4. Run a Manual Rebalance: You can trigger a rebalance explicitly and make the statement wait until it completes by adding the WAIT clause to the ALTER DISKGROUP command:


    ALTER DISKGROUP <diskgroup_name> REBALANCE POWER 8 WAIT;

Considerations:

  • Rebalance Time: Depending on the amount of data and the number of disks involved, the rebalancing process can take some time. It's best to schedule it during off-peak hours to minimize the impact on performance.
  • Diskgroup Operations: The rebalance operation redistributes the data across the disks in the disk group to ensure optimal usage of available storage.

Command to Rebalance ASM Disks:

 To rebalance ASM (Automatic Storage Management) disks in Oracle, you can use the ALTER DISKGROUP command in SQL*Plus or Oracle SQL Developer. Rebalancing typically occurs after disk additions, removals, or other changes to the ASM disk group.


ALTER DISKGROUP <diskgroup_name> REBALANCE;

Where <diskgroup_name> is the name of the ASM disk group that you want to rebalance.

Example:


ALTER DISKGROUP DATA REBALANCE;

This will trigger the rebalancing process for the DATA disk group.
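As a related hedged example (the disk path is hypothetical), adding a disk triggers a rebalance automatically, and a POWER clause can be supplied inline:

    ALTER DISKGROUP DATA
      ADD DISK '/dev/oracleasm/disks/DATA05'
      REBALANCE POWER 4;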


To monitor the progress of ASM rebalancing, query the V$ASM_OPERATION view.
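A more targeted query over standard V$ASM_OPERATION columns might look like:

    SELECT group_number, operation, state, power, sofar, est_work, est_minutes
    FROM   v$asm_operation;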

Click Here for More Details on ASM rebalancing

Oracle 19c Enhancements for Multitenant Architecture

 
Oracle 19c introduces several important updates to enhance multitenant functionality. Some of the most notable improvements include:
  • Automatic PDB Relocation: Oracle 19c introduces automatic relocation of pluggable databases, which can help organizations balance workloads more effectively. This feature simplifies resource management and ensures that databases are running on the most optimal physical infrastructure.

  • Plug-in and Plug-out for Pluggable Databases: In Oracle 19c, the process of plugging and unplugging PDBs is simplified. You can migrate and transport PDBs between CDBs with minimal downtime and effort (a hedged unplug/plug sketch follows this list).

  • PDB Snapshot Copies: Oracle 19c allows users to create snapshot copies of pluggable databases, which can be used for backup, testing, or staging. This feature enhances the ability to perform data recovery and testing without affecting the production environment.

  • Enhanced Data Guard Support for Multitenant Environments: Oracle 19c improves support for multitenant environments in Data Guard. This feature ensures high availability by replicating the entire container database, including all its pluggable databases, to a standby system.
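As a hedged sketch of the unplug/plug flow (the PDB name and XML path are placeholders; NOCOPY assumes the datafiles stay on shared storage):

    -- On the source CDB: close and unplug the PDB
    ALTER PLUGGABLE DATABASE sales_pdb CLOSE IMMEDIATE;
    ALTER PLUGGABLE DATABASE sales_pdb UNPLUG INTO '/tmp/sales_pdb.xml';

    -- On the target CDB: plug it in and open it
    CREATE PLUGGABLE DATABASE sales_pdb USING '/tmp/sales_pdb.xml' NOCOPY;
    ALTER PLUGGABLE DATABASE sales_pdb OPEN;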

Use Cases for Oracle 19c Multitenant Architecture

  1. Database Consolidation for Enterprises

    • Large enterprises often have hundreds or thousands of databases to manage. Oracle’s multitenant architecture allows businesses to consolidate databases from different departments, regions, or business units into fewer, more efficient instances. This reduces operational complexity and IT overhead while providing better resource utilization.
  2. Cloud Deployments

    • Oracle multitenant architecture is particularly useful in cloud environments, where elasticity and scalability are key. It allows businesses to quickly spin up new databases for applications in the cloud while sharing resources across multiple databases. This ensures a cost-effective, flexible cloud strategy.
  3. Testing and Development Environments

    • Developers and testers can use pluggable databases as isolated environments for testing applications, experimenting with different configurations, or staging new versions. By using PDB cloning, developers can quickly create copies of production databases for testing with minimal storage overhead.
  4. SaaS Providers

    • SaaS providers can use Oracle’s multitenant architecture to host multiple customers’ databases in the same container database. Each customer gets a fully isolated, independent database (PDB), but the service provider benefits from resource sharing and centralized management.


Oracle Database 19c Multitenant Architecture

 

Oracle Database 19c introduces several key features designed to enhance performance, scalability, and management. Among these, Oracle’s multitenant architecture stands out as a game-changing innovation, offering organizations the ability to consolidate databases while ensuring high performance and simplified management. This blog post will explore how Oracle’s multitenant architecture in 19c can drive efficiency, reduce costs, and provide a scalable solution for modern database environments.

What is Oracle Multitenant Architecture?

Oracle Database’s multitenant architecture allows you to consolidate multiple databases (referred to as pluggable databases, or PDBs) into a single container database (CDB). This architecture offers distinct advantages over traditional database designs, including improved resource utilization, simplified management, and faster provisioning of databases.

In Oracle 19c, the multitenant architecture has been refined with several new enhancements, making it even more powerful for enterprises looking to streamline operations and cut costs. The key components of this architecture include:

  • Container Database (CDB): The central database that houses one or more pluggable databases. It handles the overall database management functions.
  • Pluggable Databases (PDBs): These are the individual databases that reside within the CDB. Each PDB is a self-contained, full-fledged database, but it shares common resources like memory and background processes with other PDBs within the same CDB.
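To make the CDB/PDB relationship concrete, here is a minimal hedged sketch (it assumes Oracle Managed Files via db_create_file_dest, otherwise a FILE_NAME_CONVERT clause is needed; names and password are placeholders):

    -- Create a new PDB from the seed and open it
    CREATE PLUGGABLE DATABASE dev_pdb
      ADMIN USER pdb_admin IDENTIFIED BY "ChangeMe#24";
    ALTER PLUGGABLE DATABASE dev_pdb OPEN;

    -- List the PDBs in this CDB
    SELECT name, open_mode FROM v$pdbs;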

Key Benefits of Oracle Multitenant Architecture in 19c

  1. Resource Efficiency and Consolidation

    • One of the primary advantages of Oracle’s multitenant architecture is the ability to consolidate multiple databases on a single server. By sharing memory and background processes between pluggable databases, organizations can maximize hardware resources and reduce operational overhead. This consolidation not only improves hardware utilization but also simplifies administrative tasks such as backups and patching.
  2. Simplified Database Management

    • Oracle 19c introduces enhanced management features for multitenant environments. Administrators can now manage multiple databases from a single control point (the CDB), making tasks like database provisioning, upgrades, and patching much easier. This reduces the complexity of managing large numbers of databases and helps in scaling systems more efficiently.
    • Features like automatic PDB relocation and Pluggable Database hot clone in 19c provide additional flexibility, enabling seamless database migrations and backup operations without significant downtime.
  3. High Availability and Fault Isolation

    • With multitenant architecture, each PDB operates independently within the CDB. This means that if one PDB experiences an issue, it won’t affect other PDBs in the same CDB. This fault isolation enhances the overall availability and reliability of the database environment.
    • Additionally, Oracle Database 19c supports Data Guard and Active Data Guard for multitenant environments, offering robust disaster recovery capabilities for container databases and pluggable databases.
  4. Cost Savings and Licensing Efficiency

    • Oracle’s multitenant architecture can lead to significant cost savings, especially in licensing. Traditional licensing for individual databases can be expensive, especially when you have a large number of databases to manage. With the multitenant model, Oracle provides a way to consolidate databases into fewer physical instances, thus reducing the total number of licenses required.
    • Consolidating PDBs into a single CDB also lets them share the same licensed instance, which can further reduce licensing costs when migrating to multitenant environments.
  5. Faster Database Provisioning

    • With multitenant architecture in Oracle 19c, the time required to provision a new database is significantly reduced. PDBs can be created quickly and easily from a template, allowing for rapid deployment of new applications and services. This speed of provisioning is especially valuable for development, testing, and staging environments, where new databases are frequently needed.
  6. Seamless Database Upgrades

    • Oracle Database 19c makes database upgrades easier in multitenant environments. The Pluggable Database upgrade feature allows organizations to upgrade PDBs independently of the CDB. This means that upgrading a database becomes a more modular and manageable process, reducing downtime and disruption.


Click Here for more details on Oracle Database 19c Multitenant Architecture

Expdp/Impdp FAQ's

1. What is the difference between expdp and impdp?

  • Answer:
    • expdp (Data Pump Export): This utility is used to export database objects (tables, schemas, or entire databases) into a dump file. The export process can run in parallel and can be fine-tuned for performance.
    • impdp (Data Pump Import): This utility is used to import data from dump files created by expdp into a database. Like expdp, impdp also supports parallel execution and advanced filtering options.
    • Key Difference: While expdp is used for exporting data from Oracle databases, impdp is used for importing data back into Oracle databases.

2. What are the advantages of using Data Pump (expdp/impdp) over traditional exp/imp?

  • Answer:
    • Performance: Data Pump utilities support parallel processing, which significantly speeds up both export and import operations.
    • Network Mode: Data Pump can export/import directly between databases over the network, bypassing the need for dump files.
    • Flexibility: More granular control over the export and import process (e.g., filtering tables, schemas, or partitions).
    • Restartability: Data Pump jobs can be stopped and later restarted from where they left off, which legacy exp/imp cannot do.
    • Job Monitoring: Data Pump offers real-time monitoring and logging of operations.

3. Explain the concept of "PARALLEL" in Data Pump and how it improves performance.

  • Answer:
    • The PARALLEL parameter allows multiple worker processes to run in parallel during an export or import operation. By dividing the task among multiple processes, the overall time for data transfer is reduced.
    • expdp and impdp can perform operations faster, especially with large datasets or highly partitioned tables, by utilizing multiple CPUs or cores.
    • The PARALLEL value specifies the number of parallel workers that should be launched to handle the job. The higher the number, the more parallelism you achieve (subject to system resources).

4. What is the role of the DIRECTORY parameter in expdp/impdp?

  • Answer:
    • The DIRECTORY parameter specifies the directory object in the database where dump files will be written (for expdp) or read from (for impdp). This directory must be a valid directory object in Oracle, and it must be accessible to the Oracle Database Server. The directory path must be created and granted appropriate permissions to the Oracle user executing the job.
    • For example:

      CREATE DIRECTORY dump_dir AS '/path_to_directory';
      GRANT READ, WRITE ON DIRECTORY dump_dir TO <username>;

5. How can you perform a schema-level export using expdp?

  • Answer: To export an entire schema, you can use the following command:

    expdp username/password DIRECTORY=dump_dir DUMPFILE=schema_export.dmp LOGFILE=schema_export.log SCHEMAS=schema_name
    • SCHEMAS: This specifies the schema(s) to export. You can list multiple schemas by separating them with commas.

6. What is the EXCLUDE parameter in Data Pump and how is it used?

  • Answer:
    • The EXCLUDE parameter is used to exclude certain objects from the export or import operation. For example, you can exclude tables, indexes, or constraints from the export.
    • It is useful when you need to exclude specific objects to reduce the dump file size or to avoid exporting unnecessary objects.
    • Example of excluding tables:

      expdp username/password DIRECTORY=dump_dir DUMPFILE=export.dmp LOGFILE=export.log EXCLUDE=TABLE:"IN ('table1', 'table2')"

7. What is the INCLUDE parameter in Data Pump and how is it used?

  • Answer:
    • The INCLUDE parameter allows you to include specific types of objects in an export or import job. This is the opposite of EXCLUDE.
    • You can use it to focus only on specific database objects, such as tables, schemas, or indexes.
    • Example of including only tables in the export:

      expdp username/password DIRECTORY=dump_dir DUMPFILE=export.dmp LOGFILE=export.log INCLUDE=TABLE:"='employees'"

8. What is the FLASHBACK_TIME parameter in expdp?

  • Answer:
    • The FLASHBACK_TIME parameter allows you to perform an export as it appeared at a specific point in time. This is useful for exporting consistent data from a database as it was during a certain time, even if the data is being modified during the export process.
    • For example:

      expdp username/password DIRECTORY=dump_dir DUMPFILE=export.dmp LOGFILE=export.log FLASHBACK_TIME="TO_TIMESTAMP('2024-11-01 12:00:00', 'YYYY-MM-DD HH24:MI:SS')"

9. How do you perform a transportable tablespace export/import with expdp/impdp?

  • Answer:
    • For transportable tablespaces, Data Pump can export and import entire tablespaces, reducing the time and complexity of moving large datasets between databases.
    • First, you need to set the TRANSPORTABLE parameter to ALWAYS or NEVER during the export:

      expdp username/password DIRECTORY=dump_dir DUMPFILE=export.dmp LOGFILE=export.log TRANSPORTABLE=ALWAYS TABLESPACES=ts_name
    • Copy the tablespace's datafiles to the target system, then use the impdp utility to plug the tablespace into the target database, pointing at the copied files with TRANSPORT_DATAFILES:

      impdp username/password DIRECTORY=dump_dir DUMPFILE=export.dmp LOGFILE=import.log TRANSPORT_DATAFILES='/path_to_datafiles/ts_name01.dbf'

10. Explain the REMAP_SCHEMA parameter in impdp.

  • Answer:
    • The REMAP_SCHEMA parameter allows you to map the schema from the source database to a different schema in the target database during an import operation. This is useful when the schema name on the source and target databases are different.
    • For example:

      impdp username/password DIRECTORY=dump_dir DUMPFILE=export.dmp LOGFILE=import.log REMAP_SCHEMA=old_schema:new_schema

11. What is the purpose of the ACCESS_METHOD parameter in expdp/impdp?

  • Answer:
    • The ACCESS_METHOD parameter controls how the Data Pump job reads or writes data. The default is AUTOMATIC, which lets Data Pump choose the fastest method available; direct path is used when possible, with a fallback to external tables.
    • Common values for ACCESS_METHOD are:
      • AUTOMATIC: Data Pump selects the best method (the default).
      • DIRECT_PATH: Forces the direct path method (faster where supported).
      • EXTERNAL_TABLE: Forces the external table method (supported in more situations, but generally slower).
    • Example:

      expdp username/password DIRECTORY=dump_dir DUMPFILE=export.dmp LOGFILE=export.log ACCESS_METHOD=DIRECT_PATH

12. How do you monitor the progress of an expdp or impdp job?

  • Answer:
    • You can monitor the progress of an expdp or impdp job by querying the DBA_DATAPUMP_JOBS and DBA_DATAPUMP_SESSIONS views.
    • Example:

      SELECT * FROM DBA_DATAPUMP_JOBS WHERE JOB_NAME='YOUR_JOB_NAME';
      SELECT * FROM DBA_DATAPUMP_SESSIONS WHERE JOB_NAME='YOUR_JOB_NAME';
    • Additionally, you can attach to a running job with the ATTACH parameter and issue the STATUS command, or follow the job's LOGFILE for progress messages.

13. How can you restart a failed Data Pump job?

  • Answer:
    • If a Data Pump job fails or is stopped, you can re-attach to it and resume it from where it left off using the ATTACH parameter (there is no RESTART parameter).
    • Example (the job name shown is the default generated name for a schema export):

      expdp username/password ATTACH=SYS_EXPORT_SCHEMA_01
      Export> START_JOB

28 Nov 2024

New features in Oracle Database 21c every DBA should know

 Oracle Database 21c, the latest release from Oracle, introduces a wide array of new features and enhancements aimed at improving performance, security, automation, and cloud integration. As an Oracle DBA, staying on top of these new capabilities is crucial for optimizing database environments and supporting business requirements. Here are some of the key features in Oracle Database 21c that every DBA should know:

1. AutoML (Automated Machine Learning)

  • What It Is: Oracle Database 21c introduces AutoML capabilities, allowing DBAs and developers to build machine learning models directly within the database without needing deep expertise in data science.
  • Why It Matters: Machine learning integration directly within the database simplifies data analysis, predictive modeling, and decision-making, all without moving data outside of the database.
  • Key Benefits:
    • Automated model training and evaluation.
    • Simplifies complex machine learning tasks like classification, regression, and clustering.
    • No need for separate data pipelines or tools.

2. Blockchain Tables

  • What It Is: Oracle 21c introduces the concept of blockchain tables, where each row in a table is cryptographically secured, enabling tamper-proof data storage. These tables are ideal for use cases that require immutability and auditability, like financial transactions or supply chain tracking.
  • Why It Matters: Blockchain tables can help ensure data integrity, enhance security, and provide an audit trail that guarantees the history of each transaction is immutable.
  • Key Benefits:
    • Ensures immutability of data, making it ideal for audit-heavy use cases.
    • Integrates blockchain technology natively in the database.
    • Simplifies regulatory compliance by maintaining an incorruptible record of data changes.
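A minimal hedged sketch of the 21c syntax (the table name and retention values are illustrative):

    CREATE BLOCKCHAIN TABLE ledger_txn (
      txn_id     NUMBER,
      txn_amount NUMBER,
      created_at TIMESTAMP DEFAULT SYSTIMESTAMP
    )
    NO DROP UNTIL 31 DAYS IDLE
    NO DELETE UNTIL 365 DAYS AFTER INSERT
    HASHING USING "SHA2_512" VERSION "V1";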

3. JSON Enhancements

  • What It Is: Oracle Database 21c brings enhanced support for JSON data types, with improved indexing, searching, and manipulation of JSON data. This includes support for "JSONPath," which allows for more advanced querying of JSON documents.
  • Why It Matters: JSON continues to grow in popularity as a format for semi-structured data, especially with modern web applications and RESTful APIs. The enhancements in 21c make working with JSON data more efficient and flexible.
  • Key Benefits:
    • Full support for JSONPath queries (similar to XPath in XML).
    • Improved performance for JSON-based operations.
    • New indexing options that make querying JSON data faster and more efficient.
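A short hedged sketch of the 21c native JSON type and dot-notation access (the table and fields are illustrative):

    CREATE TABLE orders (
      order_id NUMBER PRIMARY KEY,
      doc      JSON
    );

    INSERT INTO orders VALUES (1, JSON('{"customer":"Acme","total":250}'));

    SELECT o.doc.customer, o.doc.total
    FROM   orders o
    WHERE  JSON_VALUE(o.doc, '$.total' RETURNING NUMBER) > 100;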

4. In-Memory Enhancements

  • What It Is: Oracle Database 21c enhances the In-Memory column store feature, which allows for faster query performance by storing frequently accessed data in memory. The new enhancements allow for better control and more flexibility in how data is stored and retrieved in-memory.
  • Why It Matters: In-memory computing continues to be one of the most powerful tools for speeding up database workloads, especially for analytic queries and real-time processing.
  • Key Benefits:
    • Improved performance for analytics and real-time applications.
    • Enhanced control over memory usage and in-memory data management.
    • Support for Hybrid In-Memory where both OLTP and OLAP workloads can coexist in-memory.

5. Auto DDL (Data Definition Language) Management

  • What It Is: Auto DDL automatically detects changes to the underlying database schema, such as the addition of new tables or columns, and propagates these changes to dependent objects. This feature is particularly useful when managing large, complex environments with many schema objects.
  • Why It Matters: Reduces the need for manual intervention when making schema changes, simplifying database management and improving overall efficiency.
  • Key Benefits:
    • Automatically handles schema changes without manual DDL updates.
    • Reduces human error and administrative overhead.
    • Helps DBAs manage schema changes in large, complex systems.

6. Multicloud and Hybrid Cloud Support

  • What It Is: Oracle 21c enhances its integration with multicloud and hybrid cloud environments, providing more seamless cloud-to-cloud and on-premises-to-cloud management for databases. This includes improved data replication and migration tools.
  • Why It Matters: As businesses increasingly adopt multicloud strategies, the ability to seamlessly manage Oracle databases across multiple cloud providers is critical for maintaining flexibility and optimizing cost.
  • Key Benefits:
    • Easier management and migration between Oracle Cloud, on-premises, and other cloud providers.
    • Improved cross-cloud data replication for high availability.
    • Support for hybrid cloud architectures, making it easier to run Oracle databases across different environments.

7. Sharding Enhancements

  • What It Is: Sharding enables horizontal partitioning of large databases to improve scalability and performance. Oracle 21c introduces enhancements to the sharding capabilities, including the ability to automatically rebalance shards and improve query performance.
  • Why It Matters: Sharding is a critical feature for large-scale applications requiring high availability and scalability. These improvements make sharding easier to manage and more efficient.
  • Key Benefits:
    • Automated rebalancing of data across shards for improved performance and load balancing.
    • Enhanced support for multi-tenant applications.
    • Simplified sharding administration, making it easier to scale large Oracle databases.

8. Improved Database Security (Including Data Redaction and Advanced Auditing)

  • What It Is: Oracle Database 21c enhances security features with capabilities such as dynamic data redaction, which automatically masks sensitive data from unauthorized users, and improved auditing for more granular control over database activity.
  • Why It Matters: As data privacy regulations become stricter (e.g., GDPR), securing sensitive data and ensuring compliance is critical for DBAs.
  • Key Benefits:
    • Improved data redaction for masking sensitive data in real-time.
    • Enhanced auditing for better tracking of database activity.
    • Integration with Oracle Key Vault for more robust encryption management.

9. Automatic Indexing

  • What It Is: Oracle 21c introduces an automatic indexing feature that allows the database to automatically create, manage, and drop indexes based on workloads. This removes the manual effort involved in index management and improves database performance.
  • Why It Matters: Index management is often time-consuming, and poor indexing can severely affect performance. Automatic indexing allows DBAs to focus on other tasks while optimizing query performance.
  • Key Benefits:
    • Automatic creation and tuning of indexes based on query workloads.
    • Reduces the need for manual intervention in index management.
    • Improves query performance and reduces overhead for DBAs.
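A hedged sketch of enabling the feature via DBMS_AUTO_INDEX (note that automatic indexing is only supported on certain platforms, such as Exadata and Oracle Cloud):

    -- Let automatic indexing create visible indexes the optimizer can use
    EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT')

    -- Review recent automatic indexing activity
    SELECT DBMS_AUTO_INDEX.REPORT_ACTIVITY() FROM dual;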

10. SQL Macros

  • What It Is: SQL Macros allow users to create reusable SQL code snippets that can be inserted into other SQL queries. These macros are similar to functions but can be used across different SQL statements.
  • Why It Matters: SQL Macros can greatly improve productivity by enabling code reuse, simplifying maintenance, and reducing errors in complex queries.
  • Key Benefits:
    • Encourages code reuse, reducing duplication and errors.
    • Simplifies complex queries by modularizing logic.
    • Improves performance and efficiency for DBAs and developers.
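A minimal hedged sketch of a scalar SQL macro (the classic EMP demo table is used as an illustrative stand-in):

    -- The function returns SQL text that is expanded into the calling query
    CREATE OR REPLACE FUNCTION fiscal_year(dt DATE)
      RETURN VARCHAR2 SQL_MACRO(SCALAR)
    IS
    BEGIN
      RETURN q'[ EXTRACT(YEAR FROM ADD_MONTHS(dt, 6)) ]';
    END;
    /

    SELECT ename, fiscal_year(hiredate) AS fy FROM emp;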

11. Hybrid Columnar Compression (HCC) for More Data Types

  • What It Is: Oracle 21c extends Hybrid Columnar Compression (HCC) to more data types, allowing greater compression and storage optimization for a wider range of data, particularly for archival data.
  • Why It Matters: With large datasets being common, efficient data compression is essential for reducing storage costs and improving performance in large-scale Oracle databases.
  • Key Benefits:
    • Supports compression for more data types (e.g., LOBs, JSON).
    • Reduces storage footprint and improves query performance.
    • Ideal for large-scale OLTP and OLAP environments.


Oracle Database 21c introduces several cutting-edge features designed to help DBAs optimize performance, improve security, enhance scalability, and leverage modern technologies such as machine learning and blockchain. Mastering these new features will help DBAs stay ahead in a rapidly evolving IT landscape, whether managing on-premise databases, hybrid cloud architectures, or fully cloud-based environments.

Top Certifications required for Oracle Apps DBAS

 As Oracle Apps DBAs continue to evolve their skills in a rapidly changing IT landscape, certifications remain a key differentiator for career advancement. For 2024, focusing on certifications that validate expertise in both Oracle Applications and Database technologies, particularly in the cloud era, can enhance your career prospects. Below are the top certifications for Oracle Apps DBAs in 2024:

1. Oracle Certified Professional (OCP) – Oracle Database Administration

  • Overview: Oracle’s OCP certifications are one of the most sought-after credentials for database professionals. This certification validates your ability to manage Oracle databases, including installation, configuration, backup and recovery, performance tuning, and security.
  • Why It’s Important: With Oracle’s focus on the cloud, OCP remains an essential foundational certification for DBAs who manage traditional on-premise Oracle databases and hybrid environments.
  • Key Topics:
    • Backup and Recovery with RMAN
    • Performance Tuning and Optimization
    • Security Management
    • Troubleshooting Oracle Database
  • Recommended For: DBAs who manage on-premise Oracle Databases (Oracle 19c/21c) in either traditional or cloud-based environments.

2. Oracle Cloud Infrastructure (OCI) Architect Associate

  • Overview: With Oracle’s cloud-first strategy, understanding how to deploy and manage Oracle E-Business Suite (EBS) and databases on Oracle Cloud Infrastructure (OCI) is essential. This certification demonstrates proficiency in designing, deploying, and managing infrastructure using OCI services.
  • Why It’s Important: As Oracle Apps DBAs increasingly migrate to the cloud, this certification allows DBAs to handle cloud workloads, infrastructure automation, and security on OCI.
  • Key Topics:
    • Core OCI services (Compute, Storage, Networking, and Identity & Access Management)
    • Cost Management and Pricing for OCI
    • Security and Backup in OCI
    • Disaster Recovery and High Availability on OCI
  • Recommended For: DBAs working with Oracle Cloud-based applications and databases, particularly Oracle EBS or Oracle Autonomous Database.

3. Oracle E-Business Suite R12 Certified Implementation Specialist

  • Overview: This certification validates expertise in implementing, managing, and troubleshooting Oracle E-Business Suite (EBS) applications. It covers a wide range of functional and technical topics, including database administration, setup, configuration, and performance tuning of EBS.
  • Why It’s Important: Oracle EBS continues to be a core suite of applications for large enterprises, and a DBA with expertise in both EBS and database management is highly sought after.
  • Key Topics:
    • EBS architecture and database management
    • Application tuning and troubleshooting
    • EBS migration and upgrade
    • EBS backup, recovery, and security
  • Recommended For: DBAs working with Oracle EBS, particularly those managing large EBS implementations or migrations.

4. Oracle Autonomous Database on Shared Infrastructure (OCA)

  • Overview: Oracle Autonomous Database is a cloud-based solution that leverages machine learning to automate key database tasks such as tuning, patching, and scaling. This certification demonstrates proficiency in deploying, configuring, and managing Autonomous Databases.
  • Why It’s Important: As Oracle moves toward cloud-first solutions, understanding the inner workings of Autonomous Database is critical for DBAs managing cloud applications and services.
  • Key Topics:
    • Autonomous Database deployment and configuration
    • Database patching and upgrades in Autonomous DB
    • Automated tuning, performance management, and scaling
    • Managing security and backups in Autonomous Database
  • Recommended For: DBAs working with Oracle Autonomous Database and Oracle Cloud services.

5. Oracle Database 19c/21c Certified Expert

  • Overview: The Oracle Database 19c and 21c certifications provide an in-depth focus on the newest database versions, including features such as high availability, security, and machine learning capabilities. These certifications are ideal for DBAs managing both on-premise and cloud databases.
  • Why It’s Important: Oracle Database 19c is the long-term release supported by Oracle, and 21c introduces cutting-edge features. Mastery of these versions is essential for DBAs handling enterprise-grade databases.
  • Key Topics:
    • Advanced performance tuning
    • Partitioning and clustering
    • Backup, recovery, and high availability
    • Security management and user access control
  • Recommended For: DBAs managing Oracle Database 19c/21c environments, especially those handling large-scale enterprise applications.

6. Oracle Applications DBA: R12.2 Database Administration

  • Overview: This certification focuses on the administration of Oracle E-Business Suite (EBS) R12.2. It targets DBAs who manage Oracle EBS databases and ensures they are well-versed in database performance, tuning, patching, and backup/recovery.
  • Why It’s Important: Oracle E-Business Suite R12.2 remains one of the most widely used enterprise resource planning (ERP) systems. DBAs with this certification can demonstrate their ability to support EBS in both on-premise and cloud environments.
  • Key Topics:
    • Installing and configuring Oracle EBS R12.2
    • Managing Oracle EBS R12.2 patching and upgrades
    • Performance tuning and troubleshooting
    • Managing database backups and recovery strategies
  • Recommended For: DBAs specializing in the support and management of Oracle E-Business Suite (EBS) R12.2 environments.

expdp fails with UDE-31623: operation generated ORACLE error 31623

 


Issue:

expdp fails with error 


UDE-31623: operation generated ORACLE error 31623

expdp \"/as sysdba\" tables=EMP directory=DATA_PUMP_DIR dumpfile=emp.dmp logfile=emp.log

 
Export: Release 19.0.0.0.0 - Production on Fri Nov 22 04:15:27 2024
Version 19.24.0.0.0
 
Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
 
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
 
UDE-31623: operation generated ORACLE error 31623
ORA-31623: a job is not attached to this session via the specified handle
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4747
ORA-06512: at "SYS.KUPV$FT_INT", line 2144
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.KUPV$FT_INT", line 2081
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2263
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4496
ORA-06512: at "SYS.DBMS_DATAPUMP", line 6127
ORA-06512: at line 1
 
Cause

ORA-4031 errors in alert log
ORA-04031: unable to allocate 56 bytes of shared memory ("streams pool","unknown object","streams pool","fixed allocation callback")

Solution

Check the SGA parameters and increase them as required. Data Pump queues are allocated from the streams pool, which is sized out of the SGA, so ORA-04031 errors against the streams pool typically mean the SGA is too small.

In our case, 

SQL> show parameter sga

NAME                 TYPE        VALUE
-------------------- ----------- ------
sga_max_size         big integer 12G
sga_target           big integer 10880M

The sga_target value was less than sga_max_size; increasing sga_target fixed the issue:
SQL> alter system set sga_target=12G scope=both;

System altered.



How to create restore point in Oracle database 19c

In Oracle 19c, you can create a restore point to mark a specific point in time in the database to which you can later restore the database if needed. A restore point is typically used in conjunction with Flashback technology, which allows you to perform operations such as flashback database, flashback table, or flashback restore.

There are two types of restore points in Oracle 19c:

  1. Normal Restore Point: A regular restore point that allows you to restore to that point in time.
  2. Guaranteed Restore Point (GRP): This type of restore point guarantees that the database can be flashed back to this point, even if the archived logs required for recovery are no longer available.

Steps to Create a Restore Point in Oracle 19c

1. Creating a Normal Restore Point

To create a normal restore point, you can use the CREATE RESTORE POINT command.


CREATE RESTORE POINT restore_point_name;

For example:


CREATE RESTORE POINT my_restore_point;

This creates a restore point with the name my_restore_point. You can use this restore point later to perform flashback operations.

2. Creating a Guaranteed Restore Point (GRP)

A guaranteed restore point ensures that the database can be flashed back to this point even if the archive logs are lost. To create a GRP, use the GUARANTEE FLASHBACK DATABASE option.


CREATE RESTORE POINT restore_point_name GUARANTEE FLASHBACK DATABASE;

For example:


CREATE RESTORE POINT my_guaranteed_restore_point GUARANTEE FLASHBACK DATABASE;

This ensures that all necessary archived logs and undo data are retained to allow a flashback operation to this point.

3. Checking the List of Restore Points

To view a list of existing restore points, you can query the V$RESTORE_POINT view:


SELECT restore_point_name, guarantee_flashback, scn, time FROM v$restore_point;

4. Dropping a Restore Point

If you no longer need a restore point, you can drop it using the DROP RESTORE POINT command:


DROP RESTORE POINT restore_point_name;

For example:


DROP RESTORE POINT my_restore_point;

If the restore point is a Guaranteed Restore Point (GRP), Oracle will automatically remove the guarantee when you drop it.

5. Using a Restore Point for Flashback Operations

Once a restore point is created, you can use it to perform flashback operations such as FLASHBACK DATABASE or FLASHBACK TABLE.

For example, to flashback the entire database to a specific restore point:


FLASHBACK DATABASE TO RESTORE POINT my_restore_point;

Or, to flashback a specific table:


FLASHBACK TABLE my_table TO RESTORE POINT my_restore_point;

Note:

  • Flashback Retention: For normal restore points, flashback logs are kept only as long as DB_FLASHBACK_RETENTION_TARGET allows; if the required flashback logs age out, a flashback to that point will fail. Guaranteed restore points retain everything needed regardless of the retention target, at the cost of fast recovery area space.
  • Archivelog Mode: Ensure the database is in archivelog mode to take full advantage of flashback and restore point functionality.

22 Nov 2024

Oracle ASM Interview Questions for the experienced - Part 2

 Click Here for ASM Interview Questions - Part 1

10. What is ASM rebalance, and how does it work?

  • Expected Answer: ASM rebalance is the process of redistributing data across the disks in a disk group when there is a change in the disk group (e.g., adding or dropping a disk). The rebalance operation ensures that data is evenly spread across the available disks to optimize performance and storage. It occurs automatically when disk group changes are made and can be monitored with the v$asm_operation view.

11. How does ASM handle disk failure?

  • Expected Answer: When a disk fails in a redundant disk group (using mirroring), Oracle ASM automatically rebalances the data to the remaining healthy disks. If you are using external redundancy, you may need to rely on external RAID for recovery. ASM detects disk failures via periodic disk checks and logs the failure, making it easy for administrators to take action, such as adding a replacement disk.

12. How do you migrate data from one ASM disk group to another?

  • Expected Answer: To migrate data from one ASM disk group to another, you can:
    1. Use RMAN to copy the files into the target disk group and then switch the database to the copies (with the datafile offline or the database mounted):

      RMAN> BACKUP AS COPY DATAFILE 5 FORMAT '+NEWDG';
      RMAN> SWITCH DATAFILE 5 TO COPY;
    2. Use DBMS_FILE_TRANSFER or the ASMCMD cp command to copy individual files between disk groups.
    3. Alternatively, you can use Data Pump for migrating large datasets.

13. How would you recover from a disk failure in Oracle ASM?

  • Expected Answer: To recover from a disk failure in ASM:
    1. Identify the failed disk using V$ASM_DISK.
    2. Ensure that the disk group is still operational (in case of mirroring, data is still available on the other disks).
    3. Replace the failed disk physically.
    4. Add the new disk to the ASM disk group using ALTER DISKGROUP ADD DISK.
    5. Oracle ASM will automatically rebalance the data across the disks, ensuring data is mirrored correctly.

14. Explain the role of the ASM instance and the Oracle database instance in an ASM-enabled database.

  • Expected Answer: The ASM instance manages the physical storage (disk groups and disks) and provides the storage abstraction for the Oracle database. It operates independently from the Oracle database instance, which connects to the ASM instance for reading/writing data files, control files, and redo logs. The database instance communicates with the ASM instance via Oracle background processes (e.g., DBWR, LGWR).

15. What is the difference between ASM and RAID?

  • Expected Answer: ASM is a software-based storage management solution that operates within the Oracle Database ecosystem. While it provides features similar to RAID (redundancy, striping, etc.), it is tightly integrated with Oracle databases and handles file management and storage distribution automatically. RAID, on the other hand, is a hardware or software-based technology used for disk redundancy and performance at the hardware level, but it lacks the database-level integration that ASM offers.

16. Can you configure Oracle RAC (Real Application Clusters) with ASM?

  • Expected Answer: Yes, Oracle RAC can be configured with ASM for shared storage across multiple nodes. In RAC, multiple database instances run on different nodes, and ASM provides shared disk storage, which ensures that all instances have access to the same database files stored in ASM disk groups. ASM simplifies the storage configuration for RAC by handling disk management in a cluster environment.

17. What are the ASM parameters you can modify to tune performance?

  • Expected Answer: Key knobs for tuning ASM include:
    • ASM_POWER_LIMIT: An ASM instance parameter that sets the default rebalance power, controlling how much I/O and CPU ASM uses during rebalancing.
    • DISK_REPAIR_TIME: A disk group attribute that defines how long ASM waits before dropping an offlined disk, giving transient failures time to be repaired.
    • FAILGROUP_REPAIR_TIME: A disk group attribute that specifies the equivalent repair window at the failure group level. A hedged example of setting these follows below.
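A hedged sketch of adjusting these (the values are illustrative; ASM_POWER_LIMIT is set in the ASM instance):

    -- Disk group attribute: keep an offlined disk for 8 hours before dropping it
    ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '8h';

    -- ASM instance parameter: default rebalance power
    ALTER SYSTEM SET asm_power_limit = 4;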

18. How do you monitor ASM performance?

  • Expected Answer: You can monitor ASM performance using the following methods:
    • V$ASM views: Use views like V$ASM_DISK, V$ASM_DISKGROUP, and V$ASM_OPERATION to track ASM performance and disk operations.
    • Oracle Enterprise Manager (OEM): OEM provides a graphical interface to monitor ASM performance, including disk group usage, rebalance status, and storage health.
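For a quick capacity and health check, a query over standard V$ASM_DISKGROUP columns such as the following could be used:

    SELECT name, state, type, total_mb, free_mb
    FROM   v$asm_diskgroup;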