The YAML provided on the official Mongo Docker Image page does not work on CentOS

The given yml (as officially provided):

# Use root/example as user/password credentials
version: '3.1'

services:

  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example

  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: example

This works fine in my Windows and macOS Docker environments. But when I run the same yml in my CentOS Docker environment, mongo-express cannot connect to the mongodb service, so mongo-express never shows up at localhost:8081 in the browser. How can I fix this?

Edit:

I suspect there is a DNS problem when resolving "mongo" to the corresponding container IP address. I cannot explain why, because this is not an issue on Windows.
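
One way to check this suspicion (a sketch; the container name is a placeholder that depends on your compose project, and it assumes getent is available inside the mongo-express image):

# list container names, then try to resolve "mongo" from inside mongo-express
docker ps --format '{{.Names}}'
docker exec -it <mongo-express-container> getent hosts mongo
# no output (or an error) would point at the embedded DNS lookup failing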

Here is the log of the mongo (mongodb) service:

{"t":{"$date":"2021-01-01T00:15:56.059+00:00"},"s":"I","c":"CONTROL","id":23285,"ctx":"main","msg":"Automatically disabling TLS 1.0,to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2021-01-01T00:15:56.063+00:00"},"s":"W","c":"ASIO","id":22601,"msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2021-01-01T00:15:56.063+00:00"},"c":"NETWORK","id":4648601,"msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required,set tcpFastOpenServer,tcpFastOpenClient,and tcpFastOpenQueueSize."}
killing process with pid: 29
{"t":{"$date":"2021-01-01T00:15:56.063+00:00"},"id":23377,"ctx":"SignalHandler","msg":"Received signal","attr":{"signal":15,"error":"Terminated"}}
{"t":{"$date":"2021-01-01T00:15:56.063+00:00"},"id":23378,"msg":"Signal was sent by kill(2)","attr":{"pid":83,"uid":999}}
{"t":{"$date":"2021-01-01T00:15:56.063+00:00"},"id":23381,"msg":"will terminate after current cmd ends"}
{"t":{"$date":"2021-01-01T00:15:56.063+00:00"},"c":"REPL","id":4784900,"msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":10000}}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"c":"COMMAND","id":4784901,"msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"c":"SHARDING","id":4784902,"msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"id":4784903,"msg":"Shutting down the LogicalSessionCache"}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"id":20562,"msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"id":23017,"ctx":"listener","msg":"removing socket file","attr":{"path":"/tmp/mongodb-27017.sock"}}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"id":4784905,"msg":"Shutting down the global connection pool"}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"c":"STORAGE","id":4784906,"msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"c":"-","id":20520,"msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"id":4784908,"msg":"Shutting down the PeriodicThreadToAbortExpiredTransactions"}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"id":4784934,"msg":"Shutting down the PeriodicThreadToDecreaseSnapshotHistoryCachePressure"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"id":4784909,"msg":"Shutting down the ReplicationCoordinator"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"id":4784910,"msg":"Shutting down the ShardingInitializationMongoD"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"id":4784911,"msg":"Enqueuing the ReplicationStateTransitionLock for shutdown"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"id":4784912,"msg":"Killing all operations for shutdown"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"id":4695300,"msg":"Interrupted all currently running operations","attr":{"opsKilled":3}}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"id":4784913,"msg":"Shutting down all open transactions"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"id":4784914,"msg":"Acquiring the ReplicationStateTransitionLock for shutdown"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"c":"INDEX","id":4784915,"msg":"Shutting down the IndexBuildsCoordinator"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"id":4784916,"msg":"Reacquiring the ReplicationStateTransitionLock for shutdown"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"id":4784917,"msg":"Attempting to mark clean shutdown"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"id":4784918,"msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"id":4784921,"msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"id":4784925,"msg":"Shutting down free monitoring"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"id":20609,"c":"FTDC","id":4784926,"msg":"Shutting down full-time data capture"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"id":20626,"msg":"Shutting down full-time diagnostic data capture"}
{"t":{"$date":"2021-01-01T00:15:56.067+00:00"},"id":4784927,"msg":"Shutting down the HealthLog"}
{"t":{"$date":"2021-01-01T00:15:56.067+00:00"},"id":4784929,"msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2021-01-01T00:15:56.067+00:00"},"id":4784930,"msg":"Shutting down the storage engine"}
{"t":{"$date":"2021-01-01T00:15:56.067+00:00"},"id":20282,"msg":"Deregistering all the collections"}
{"t":{"$date":"2021-01-01T00:15:56.067+00:00"},"id":22261,"msg":"Timestamp monitor shutting down"}
{"t":{"$date":"2021-01-01T00:15:56.067+00:00"},"id":22317,"msg":"WiredTigerKVEngine shutting down"}
{"t":{"$date":"2021-01-01T00:15:56.067+00:00"},"id":22318,"msg":"Shutting down session sweeper thread"}
{"t":{"$date":"2021-01-01T00:15:56.067+00:00"},"id":22319,"msg":"Finished shutting down session sweeper thread"}
{"t":{"$date":"2021-01-01T00:15:56.067+00:00"},"id":22320,"msg":"Shutting down journal flusher thread"}
{"t":{"$date":"2021-01-01T00:15:56.068+00:00"},"id":22321,"msg":"Finished shutting down journal flusher thread"}
{"t":{"$date":"2021-01-01T00:15:56.068+00:00"},"id":22322,"msg":"Shutting down checkpoint thread"}
{"t":{"$date":"2021-01-01T00:15:56.068+00:00"},"id":22323,"msg":"Finished shutting down checkpoint thread"}
{"t":{"$date":"2021-01-01T00:15:56.068+00:00"},"id":4795902,"msg":"Closing WiredTiger","attr":{"closeConfig":"leak_memory=true,"}}
{"t":{"$date":"2021-01-01T00:15:56.074+00:00"},"id":4795901,"msg":"WiredTiger closed","attr":{"durationMillis":6}}
{"t":{"$date":"2021-01-01T00:15:56.074+00:00"},"id":22279,"msg":"shutdown: removing fs lock..."}
{"t":{"$date":"2021-01-01T00:15:56.074+00:00"},"id":4784931,"msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2021-01-01T00:15:56.074+00:00"},"id":20565,"msg":"Now exiting"}
{"t":{"$date":"2021-01-01T00:15:56.074+00:00"},"id":23138,"msg":"Shutting down","attr":{"exitCode":0}}

MongoDB init process complete; ready for start up.

{"t":{"$date":"2021-01-01T00:15:57.181+00:00"},to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2021-01-01T00:15:57.185+00:00"},"msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2021-01-01T00:15:57.185+00:00"},and tcpFastOpenQueueSize."}
{"t":{"$date":"2021-01-01T00:15:57.186+00:00"},"id":4615611,"ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"973e354c35f7"}}
{"t":{"$date":"2021-01-01T00:15:57.186+00:00"},"id":23403,"msg":"Build Info","attr":{"buildInfo":{"version":"4.4.2","gitVersion":"15e73dc5738d2278b688f8929aee605fe4279b0e","openSSLVersion":"OpenSSL 1.1.1  11 Sep 2018","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu1804","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2021-01-01T00:15:57.186+00:00"},"id":51765,"msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"18.04"}}}
{"t":{"$date":"2021-01-01T00:15:57.186+00:00"},"id":21951,"msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"},"security":{"authorization":"enabled"}}}}
{"t":{"$date":"2021-01-01T00:15:57.189+00:00"},"id":22270,"msg":"Storage engine to use detected by data files","attr":{"dbpath":"/data/db","storageEngine":"wiredTiger"}}
{"t":{"$date":"2021-01-01T00:15:57.189+00:00"},"id":22297,"msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]}
{"t":{"$date":"2021-01-01T00:15:57.189+00:00"},"id":22315,"msg":"Opening WiredTiger","attr":{"config":"create,cache_size=6656M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}}
{"t":{"$date":"2021-01-01T00:15:57.861+00:00"},"id":22430,"msg":"WiredTiger message","attr":{"message":"[1609460157:861019][1:0x7f184105dac0],txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 2"}}
{"t":{"$date":"2021-01-01T00:15:57.969+00:00"},"attr":{"message":"[1609460157:969647][1:0x7f184105dac0],txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 2"}}
{"t":{"$date":"2021-01-01T00:15:58.064+00:00"},"attr":{"message":"[1609460158:64534][1:0x7f184105dac0],txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 1/29568 to 2/256"}}
{"t":{"$date":"2021-01-01T00:15:58.175+00:00"},"attr":{"message":"[1609460158:175353][1:0x7f184105dac0],txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 2"}}
{"t":{"$date":"2021-01-01T00:15:58.249+00:00"},"attr":{"message":"[1609460158:249021][1:0x7f184105dac0],txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 2"}}
{"t":{"$date":"2021-01-01T00:15:58.301+00:00"},"attr":{"message":"[1609460158:301066][1:0x7f184105dac0],txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0,0)"}}
{"t":{"$date":"2021-01-01T00:15:58.301+00:00"},"attr":{"message":"[1609460158:301126][1:0x7f184105dac0],txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0,0)"}}
{"t":{"$date":"2021-01-01T00:15:58.317+00:00"},"id":4795906,"msg":"WiredTiger opened","attr":{"durationMillis":1128}}
{"t":{"$date":"2021-01-01T00:15:58.317+00:00"},"c":"RECOVERY","id":23987,"msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}}
{"t":{"$date":"2021-01-01T00:15:58.318+00:00"},"id":4366408,"msg":"No table logging settings modifications are required for existing WiredTiger tables","attr":{"loggingEnabled":true}}
{"t":{"$date":"2021-01-01T00:15:58.320+00:00"},"id":22262,"msg":"Timestamp monitor starting"}
{"t":{"$date":"2021-01-01T00:15:58.321+00:00"},"id":22161,"msg":"You are running in OpenVZ which can cause issues on versions of RHEL older than RHEL6","tags":["startupWarnings"]}
{"t":{"$date":"2021-01-01T00:15:58.324+00:00"},"id":20536,"msg":"Flow Control is enabled on this deployment"}
{"t":{"$date":"2021-01-01T00:15:58.325+00:00"},"id":20625,"msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/data/db/diagnostic.data"}}
{"t":{"$date":"2021-01-01T00:15:58.334+00:00"},"id":23015,"msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
{"t":{"$date":"2021-01-01T00:15:58.334+00:00"},"attr":{"address":"0.0.0.0"}}
{"t":{"$date":"2021-01-01T00:15:58.334+00:00"},"id":23016,"msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}

Here is the log of the mongo-express service:

Mongo Express server listening at http://0.0.0.0:8081
Server is open to allow connections from anyone (0.0.0.0)
basicAuth credentials are "admin:pass", it is recommended you change this in your config.js!

/node_modules/mongodb/lib/server.js:265
        process.nextTick(function() { throw err; })
                                      ^
Error [MongoError]: failed to connect to server [mongo:27017] on first connect
    at Pool.<anonymous> (/node_modules/mongodb-core/lib/topologies/server.js:326:35)
    at Pool.emit (events.js:314:20)
    at Connection.<anonymous> (/node_modules/mongodb-core/lib/connection/pool.js:270:12)
    at Object.onceWrapper (events.js:421:26)
    at Connection.emit (events.js:314:20)
    at Socket.<anonymous> (/node_modules/mongodb-core/lib/connection/connection.js:175:49)
    at Object.onceWrapper (events.js:421:26)
    at Socket.emit (events.js:314:20)
    at emitErrorNT (internal/streams/destroy.js:92:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
Waiting for mongo:27017...
/docker-entrypoint.sh: line 14: mongo: Try again
/docker-entrypoint.sh: line 14: /dev/tcp/mongo/27017: Invalid argument
Fri Jan  1 00:17:21 UTC 2021 retrying to connect to mongo:27017 (2/5)
/docker-entrypoint.sh: line 14: mongo: Try again
/docker-entrypoint.sh: line 14: /dev/tcp/mongo/27017: Invalid argument
Fri Jan  1 00:17:27 UTC 2021 retrying to connect to mongo:27017 (3/5)
/docker-entrypoint.sh: line 14: mongo: Try again
/docker-entrypoint.sh: line 14: /dev/tcp/mongo/27017: Invalid argument
Fri Jan  1 00:17:33 UTC 2021 retrying to connect to mongo:27017 (4/5)
/docker-entrypoint.sh: line 14: mongo: Try again
/docker-entrypoint.sh: line 14: /dev/tcp/mongo/27017: Invalid argument
Fri Jan  1 00:17:39 UTC 2021 retrying to connect to mongo:27017 (5/5)
/docker-entrypoint.sh: line 14: mongo: Try again
/docker-entrypoint.sh: line 14: /dev/tcp/mongo/27017: Invalid argument

Solutions

Try this:

# Use root/example as user/password credentials
version: '3.1'

services:

  mongo:
    image: mongo
    container_name: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    ports:
      - 27017:27017

  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: example
    links:
      - "mongo"
    depends_on:
      - "mongo"


If you like, also try this one (a working example):

version: '3'

services:
  mongo:
    image: mongo
    restart: always
    container_name: my_mongo
    ports:
    - "27017:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    volumes:
    - ./data/db:/data/db
    - ./data/configdb:/data/configdb

  mongo-express:
    image: mongo-express
    restart: always
    container_name: my_mongo_express
    ports:
    - "8081:8081"
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: example
      ME_CONFIG_BASICAUTH_USERNAME: user
      ME_CONFIG_BASICAUTH_PASSWORD: example
    depends_on:
      - mongo
    
volumes:
  node_modules:
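
A quick way to confirm that both containers landed on the same network in this variant (container names as declared above):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' my_mongo my_mongo_express
# prints one IP address per container; both should be on the same subnet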

If you think there is a problem there, you can also try exposing the Mongo service under a different name or port. In that case you must also change the corresponding variables in the mongo-express service accordingly (ME_CONFIG_MONGODB_SERVER and ME_CONFIG_MONGODB_PORT) - details about the possible configuration options here.
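
As a sketch of that (the network and container names here are illustrative, not from the original answer):

docker run -d --name mongo-express --net my-network \
  -e ME_CONFIG_MONGODB_ADMINUSERNAME=root \
  -e ME_CONFIG_MONGODB_ADMINPASSWORD=example \
  -e ME_CONFIG_MONGODB_SERVER=my_mongo \
  -e ME_CONFIG_MONGODB_PORT=27017 \
  -p 8081:8081 mongo-express
# my_mongo must be the name of the Mongo container on the shared my-network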

Either way, has the network involving these two services been created correctly?
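
Compose normally creates a default network named after the project directory; you can verify it like this (<project> is a placeholder for your directory name):

docker network ls
docker network inspect <project>_default
# both the mongo and mongo-express containers should be listed under "Containers"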


Usually the number of retries for mongo-express is 5; maybe on the docker-centos-env the database needs more time to become connectable than on windows/macos. You can check whether the database itself works with docker exec -it nameofmongocontainer mongo -uroot -pexample and then run something like:

show dbs;

About the DNS issue you mentioned: it happened 3 times on my docker windows env before mongo-express managed to establish a connection to the database, so it is not unique to the centos docker env..

Potential fix: you can add a script to your docker-compose file that increases the number of retries to 20.. Here is the example from here (save the docker-entrypoint.sh script in the same directory as your docker-compose file):

#!/bin/bash
set -eo pipefail
# This is a very small modification of the original docker script taken from
# https://github.com/mongo-express/mongo-express-docker/blob/master/docker-entrypoint.sh
# with only change being the modified value of max_tries=20 instead of original 5,
# to give the main mongodb container enough time to start and be connectable to.
# Also, see https://docs.docker.com/compose/startup-order/

# if command does not start with mongo-express, run the command instead of the entrypoint
if [ "${1}" != "mongo-express" ]; then
    exec "$@"
fi

function wait_tcp_port {
    local host="$1" port="$2"
    local max_tries=20 tries=1

    # see http://tldp.org/LDP/abs/html/devref1.html for description of this syntax.
    while ! exec 6<>/dev/tcp/$host/$port && [[ $tries -lt $max_tries ]]; do
        sleep 10s
        tries=$(( tries + 1 ))
        echo "$(date) retrying to connect to $host:$port ($tries/$max_tries)"
    done
    exec 6>&-
}

# if ME_CONFIG_MONGODB_SERVER has a comma in it, we're pointing to a replica set (https://github.com/mongo-express/mongo-express-docker/issues/21)
if [[ "$ME_CONFIG_MONGODB_SERVER" != *,* ]]; then
    # wait for the mongo server to be available
    echo Waiting for ${ME_CONFIG_MONGODB_SERVER}:${ME_CONFIG_MONGODB_PORT:-27017}...
    wait_tcp_port "${ME_CONFIG_MONGODB_SERVER}" "${ME_CONFIG_MONGODB_PORT:-27017}"
fi

# run mongo-express
exec node app
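
One practical note (an assumption about the bind mount, not part of the original answer): the script must be executable on the host, since the container runs it directly:

chmod +x docker-entrypoint.sh
docker-compose up -d --force-recreate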

The docker-compose file would then be:

version: '3.1'

services:

  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example

  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: example
    volumes:
      - ./docker-entrypoint.sh:/docker-entrypoint.sh

Maybe it is a CentOS problem? What you can do is perform the steps separately:

  1. docker network create mongo-test
  2. docker run -d --name mongo -e MONGO_INITDB_ROOT_USERNAME=root -e MONGO_INITDB_ROOT_PASSWORD=example --net mongo-test mongo
  3. docker run -d --name mongo-express -e ME_CONFIG_MONGODB_ADMINUSERNAME=root -e ME_CONFIG_MONGODB_ADMINPASSWORD=example --net mongo-test -p 8081:8081 mongo-express

Potential problem: centos does not allow any container to use "-p". Try running a test: docker run --rm --name test --net mongo-test -p 8081:80 httpd and then check your localhost:8081.
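
If port publishing works, the httpd test container should answer on the host - a minimal check:

curl -I http://localhost:8081
# expect an "HTTP/1.1 200 OK" response from the test httpd container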

