Monitor the logs of a Depot¶
Whenever an Object Storage Depot is created, a pod is provisioned in the backend with three containers. One is the main container, named after the Depot itself. The other two are init containers, automatically generated with the suffixes `-dbi` and `-dbc` appended to the Depot's identifier.
- `dbi` stands for Database Independent Interface. It provides an abstraction layer for interacting with the underlying metadata databases in a standard way, ensuring flexibility across the various database types that may be integrated with object storage for tracking object metadata.
- `dbc` refers to Database Connectivity. It handles the actual connection and communication setup with the metadata database during pod initialization, ensuring that the main container can successfully retrieve or store metadata related to the stored objects.
Together, these init containers ensure the object storage environment is correctly configured and ready before the main container starts execution.
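As a quick illustration, the naming convention above can be sketched as follows. The `expected_containers` helper is hypothetical and only mirrors the naming rule described in this section; it is not part of the DataOS tooling:

```python
def expected_containers(depot_name: str) -> list[str]:
    """Return the container names provisioned for a Depot pod:
    the main container plus the two init containers (-dbi, -dbc)."""
    return [
        depot_name,           # main container, named after the Depot
        f"{depot_name}-dbi",  # init container: Database Independent Interface
        f"{depot_name}-dbc",  # init container: Database Connectivity
    ]

print(expected_containers("thirdparty"))
# → ['thirdparty', 'thirdparty-dbi', 'thirdparty-dbc']
```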
The sections below describe the steps to monitor the logs of a Depot through different endpoints: the DataOS CLI, the Metis UI, and the Operations App.
DataOS CLI¶
To monitor the logs of a Depot using the DataOS CLI, follow the steps below:
- On the DataOS CLI, execute the following command, replacing the placeholders with the actual values.

Example Usage:
```shell
dataos-ctl log -t depot -n thirdparty
```

Expected output:

```
INFO[0000] log(public)...
INFO[0002] log(public)...complete

     NODE NAME     │ CONTAINER NAME │ ERROR
───────────────────┼────────────────┼────────
  thirdparty-ss-0  │ thirdparty     │
                     # ^ main container

-------------------LOGS-------------------
15-06-2025 02:35:33 [INFO] Configuring...
15-06-2025 02:35:33 [INFO] Configuring...
15-06-2025 02:35:33 [INFO] Starting Hive Metastore service. Command: /opt/hive-metastore/bin/start-metastore
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-hive-metastore-3.1.3-bin/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-3.3.4/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2025-06-15 02:35:36: Starting Metastore Server
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-hive-metastore-3.1.3-bin/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-3.3.4/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
```

This log confirms that the `thirdparty` Depot has:

- Completed its configuration process.
- Successfully initiated the Hive Metastore service.
- Encountered minor SLF4J multiple-binding warnings that do not affect critical functionality.
No errors are present in the output, and the startup appears to be smooth and complete.
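For longer outputs, the retrieved log text can also be checked programmatically. The following is a minimal sketch (not part of the DataOS tooling) that counts entries per log level, assuming the timestamped `[LEVEL]` line format shown above:

```python
import re
from collections import Counter

# Matches lines like: 15-06-2025 02:35:33 [INFO] Configuring...
LOG_LINE = re.compile(r"^\d{2}-\d{2}-\d{4} \d{2}:\d{2}:\d{2} \[(\w+)\]")

def count_levels(log_text: str) -> Counter:
    """Count occurrences of each log level in captured Depot log output.
    Lines that do not match the format (e.g. SLF4J warnings) are ignored."""
    levels = Counter()
    for line in log_text.splitlines():
        match = LOG_LINE.match(line)
        if match:
            levels[match.group(1)] += 1
    return levels

sample = """15-06-2025 02:35:33 [INFO] Configuring...
15-06-2025 02:35:33 [INFO] Starting Hive Metastore service.
SLF4J: Class path contains multiple SLF4J bindings."""

print(count_levels(sample))           # Counter({'INFO': 2})
print(count_levels(sample)["ERROR"])  # 0 → no errors present
```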
Metis UI¶
To monitor the logs of a Depot on the Metis Catalog UI, follow the steps below:
- Open the Metis Catalog.
- Search for the Depot by name.
- Click on the Depot that needs to be monitored and navigate to the 'Runtime' section.
- Click on the pod and navigate to the 'Pod Logs' section. In the 'Pod Logs' section, users can monitor the logs of the init and main containers.
Main container logs:

`dbi` container logs:

`dbc` container logs:
Operations App¶
To monitor the logs of a Depot on the Operations App, follow the steps below:
- Open the Operations app.
- Navigate to User Space → Resources → Depot and search for the Depot by name.
- Click on the Depot that needs to be monitored and navigate to the 'Resource Runtime' section.
- Click on the runtime node for which you want to monitor the logs, and navigate to the 'Runtime Node Logs' section.
These logs confirm that:

- The `dbi` init container performed the Hive schema setup tasks by executing SQL scripts to prepare the metastore.
- The `dbc` init container initialized the Kyuubi server, loaded environment variables, and launched the process that connects query engines to the Depot.
- The main `thirdparty` container successfully started the Hive Metastore service and confirmed that the system is ready to handle metadata operations.

This three-stage initialization confirms that the object storage Depot is fully operational and ready for use.
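The three-stage readiness check described above can be sketched programmatically. The marker strings, the sample log snippets, and the `depot_ready` helper below are illustrative assumptions for this example, not part of DataOS:

```python
# Readiness markers per container, loosely based on the log excerpts above.
# The exact strings are assumptions chosen for illustration.
READINESS_MARKERS = {
    "thirdparty-dbi": "schema",          # dbi: Hive schema setup via SQL scripts
    "thirdparty-dbc": "Kyuubi",          # dbc: Kyuubi server initialization
    "thirdparty": "Starting Metastore",  # main: Hive Metastore service startup
}

def depot_ready(container_logs: dict[str, str]) -> bool:
    """Return True only if every container's log contains its readiness marker."""
    return all(
        marker in container_logs.get(container, "")
        for container, marker in READINESS_MARKERS.items()
    )

logs = {
    "thirdparty-dbi": "Hive schema setup completed",
    "thirdparty-dbc": "Starting Kyuubi server...",
    "thirdparty": "2025-06-15 02:35:36: Starting Metastore Server",
}
print(depot_ready(logs))  # True
```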