Version: Latest-3.5

CREATE RESOURCE

Creates a resource. The following types of resources can be created: Apache Spark™, Apache Hive™, Apache Iceberg, Apache Hudi, and JDBC. Spark resources are used by Spark Load to manage load information, such as the YARN configuration, the storage path for intermediate data, and the Broker configuration. Hive, Iceberg, Hudi, and JDBC resources are used to manage the data source access information involved in querying external tables.

Tip
  • Only users with the CREATE RESOURCE privilege at the SYSTEM level can perform this operation.
  • JDBC resources can be created only in StarRocks v2.3 and later.

Syntax

CREATE [EXTERNAL] RESOURCE "resource_name"
PROPERTIES ("key"="value", ...)

Parameters

  • resource_name: The name of the resource to create. For naming conventions, see System limits.

  • PROPERTIES: The properties of the resource. PROPERTIES vary depending on the resource type. See Examples for details.

Examples

  1. Create a Spark resource named spark0 in YARN cluster mode.

    CREATE EXTERNAL RESOURCE "spark0"
    PROPERTIES
    (
    "type" = "spark",
    "spark.master" = "yarn",
    "spark.submit.deployMode" = "cluster",
    "spark.jars" = "xxx.jar,yyy.jar",
    "spark.files" = "/tmp/aaa,/tmp/bbb",
    "spark.executor.memory" = "1g",
    "spark.yarn.queue" = "queue0",
    "spark.hadoop.yarn.resourcemanager.address" = "127.0.0.1:9999",
    "spark.hadoop.fs.defaultFS" = "hdfs://127.0.0.1:10000",
    "working_dir" = "hdfs://127.0.0.1:10000/tmp/starrocks",
    "broker" = "broker0",
    "broker.username" = "user0",
    "broker.password" = "password0"
    );

    The Spark-related parameters are as follows:

    1. spark.master: Required. Currently, yarn and spark://host:port are supported.
    2. spark.submit.deployMode: Required. The deployment mode of the Spark program. cluster and client are supported.
    3. spark.hadoop.yarn.resourcemanager.address: Required when master is yarn.
    4. spark.hadoop.fs.defaultFS: Required when master is yarn.
    5. Other parameters are optional. See https://spark.apache.org/docs/latest/configuration.html.

    If Spark is used for ETL, working_dir and broker must also be specified. They are described as follows:

    working_dir: The directory used by ETL. Required when Spark is used as an ETL resource. Example: hdfs://host:port/tmp/starrocks.
    broker: The name of the broker. Required when Spark is used as an ETL resource. The broker must be configured in advance by using the `ALTER SYSTEM ADD BROKER` command.
    broker.property_key: The property information that needs to be specified when the broker reads the intermediate files generated by ETL.
  2. Create a Hive resource named hive0.

    CREATE EXTERNAL RESOURCE "hive0"
    PROPERTIES
    (
    "type" = "hive",
    "hive.metastore.uris" = "thrift://10.10.44.98:9083"
    );
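
  3. Once a Spark resource exists, a Spark Load job references it by name in a WITH RESOURCE clause. The following is a minimal sketch, not a statement from this document: the database, table, label, and HDFS path (`db1`, `tbl1`, `label1`, the input file path) are placeholder assumptions.

    ```sql
    -- Hypothetical Spark Load job that uses the spark0 resource created above.
    -- All object names and the HDFS path are placeholders.
    LOAD LABEL db1.label1
    (
        DATA INFILE("hdfs://127.0.0.1:10000/user/starrocks/data/input/file")
        INTO TABLE tbl1
    )
    WITH RESOURCE 'spark0'
    (
        "spark.executor.memory" = "2g"  -- per-job override of a resource property
    )
    PROPERTIES
    (
        "timeout" = "3600"
    );
    ```

    Properties listed after WITH RESOURCE override the corresponding properties of the resource for this job only, which is why the resource can hold cluster-wide defaults while individual loads tune memory or queue settings.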
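
  4. The tip above notes that JDBC resources are supported from StarRocks v2.3 onward, but no example is shown. Below is a hedged sketch of creating one; the URI, driver URL, and credentials are placeholder assumptions, not values from this document.

    ```sql
    -- Hypothetical JDBC resource for querying an external MySQL data source.
    -- The URI, driver URL, and credentials are placeholders.
    CREATE EXTERNAL RESOURCE "jdbc0"
    PROPERTIES
    (
        "type" = "jdbc",
        "user" = "user0",
        "password" = "password0",
        "jdbc_uri" = "jdbc:mysql://127.0.0.1:3306/db0",
        "driver_url" = "http://127.0.0.1:8000/mysql-connector-java-8.0.28.jar",
        "driver_class" = "com.mysql.cj.jdbc.Driver"
    );
    ```

    The driver JAR must be reachable from the FE and BE nodes at the given driver_url.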