Building TensorFlow from Source

Build a TensorFlow pip package from source and install it on Ubuntu Linux and macOS. While the instructions might work for other systems, they are only tested and supported for Ubuntu and macOS.

Note: Well-tested, pre-built TensorFlow packages are already provided for Linux and macOS systems.

## Setup for Linux and macOS

Install the following build tools to configure your development environment.

### Install Python and the TensorFlow package dependencies

On Ubuntu:

```shell
sudo apt install python-dev python-pip  # or python3-dev python3-pip
```

Install the TensorFlow pip package dependencies (if using a virtual environment, omit the `--user` argument):

```shell
pip install -U --user pip six numpy wheel mock
pip install -U --user keras_applications==1.0.6 --no-deps
pip install -U --user keras_preprocessing==1.0.5 --no-deps
```

The dependencies are listed in the `setup.py` file under `REQUIRED_PACKAGES`.

### Install Bazel

Install Bazel, the build tool used to compile TensorFlow. Add the location of the Bazel executable to your `PATH` environment variable.

### Install GPU support (optional, Linux only)

There is no GPU support for macOS. Read the GPU support guide to install the drivers and additional software required to run TensorFlow on a GPU.

Note: It is easier to set up one of TensorFlow's GPU-enabled Docker images.

## Download the TensorFlow source code

Use Git to clone the TensorFlow repository:

```shell
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
```

The repo defaults to the master development branch. You can also check out a release branch to build:

```shell
git checkout branch_name  # r1.9, r1.10, etc.
```

## Configure the build

Configure your system build by running the following at the root of your TensorFlow source tree:

```shell
./configure
```

This script prompts you for the locations of TensorFlow dependencies and asks for additional build configuration options (compiler flags, for example).
The following shows a sample run of `./configure` (your session may differ).

### Configuration options

For GPU support, specify the versions of CUDA and cuDNN. If your system has multiple versions of CUDA or cuDNN installed, explicitly set the version instead of relying on the default. `./configure` creates symbolic links to your system's CUDA libraries, so if you update your CUDA library paths, this configuration step must be run again before building.

For compilation optimization flags, the default (`-march=native`) optimizes the generated code for your machine's CPU type. However, if building TensorFlow for a different CPU type, consider a more specific optimization flag. See the GCC manual for examples.

There are some preconfigured build configs available that can be added to the `bazel build` command, for example:

- `--config=mkl`: support for the Intel® MKL-DNN.
- `--config=monolithic`: configuration for a mostly static, monolithic build.

Note: Starting with TensorFlow 1.6, binaries use AVX instructions which may not run on older CPUs.

## Run the tests (optional)

To test your copy of the source tree, run the following test for versions r1.12 and before (this may take a while):

```shell
bazel test -c opt -- //tensorflow/... -//tensorflow/compiler/... -//tensorflow/contrib/lite/...
```

For versions after r1.12 (like master), run the following:

```shell
bazel test -c opt -- //tensorflow/... -//tensorflow/compiler/... -//tensorflow/lite/...
```

Key Point: If you're having build problems on the latest development branch, try a release branch that is known to work.

## Build the pip package

### Bazel build

CPU-only: use Bazel to make the TensorFlow package builder with CPU-only support:

```shell
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
```

GPU support: to make the TensorFlow package builder with GPU support:

```shell
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
```

### Bazel build options

Building TensorFlow from source can use a lot of RAM.
If your system is memory-constrained, limit Bazel's RAM usage with `--local_resources 2048,.5,1.0`.

The official TensorFlow packages are built with GCC 4 and use the older ABI. For GCC 5 and later, make your build compatible with the older ABI using `--cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0"`. ABI compatibility ensures that custom ops built against the official TensorFlow package continue to work with the GCC 5-built package.

### Build the package

The `bazel build` command creates an executable named `build_pip_package`; this is the program that builds the pip package. For example, the following builds a `.whl` package in the `/tmp/tensorflow_pkg` directory:

```shell
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
```

Although it is possible to build both CUDA and non-CUDA configurations under the same source tree, it's recommended to run `bazel clean` when switching between these two configurations.

### Install the package

The filename of the generated `.whl` file depends on the TensorFlow version and your platform. Use `pip install` to install the package, for example:

```shell
pip install /tmp/tensorflow_pkg/tensorflow-version-tags.whl
```

Success: TensorFlow is now installed.

## Docker Linux builds

TensorFlow's Docker development images are an easy way to set up an environment to build Linux packages from source. These images already contain the source code and dependencies required to build TensorFlow. See the TensorFlow Docker guide for installation instructions and the list of available image tags.

### CPU-only

The following example uses the `:nightly-devel` image to build a CPU-only Python 2 package from the latest TensorFlow source code. See the Docker guide for available TensorFlow `-devel` tags.
Download the latest development image and start a Docker container that we'll use to build the pip package:

```shell
docker pull tensorflow/tensorflow:nightly-devel
docker run -it -w /tensorflow -v $PWD:/mnt -e HOST_PERMS="$(id -u):$(id -g)" \
    tensorflow/tensorflow:nightly-devel bash

git pull  # within the container, download the latest source code
```

The above `docker run` command starts a shell in the `/tensorflow` directory, the root of the source tree. It mounts the host's current directory at the container's `/mnt` directory and passes the host user's information to the container through an environment variable (used to set permissions; Docker can make this tricky).

Alternatively, to build a host copy of TensorFlow within a container, mount the host source tree at the container's `/tensorflow` directory:

```shell
docker run -it -w /tensorflow -v /path/to/tensorflow:/tensorflow -v $PWD:/mnt \
    -e HOST_PERMS="$(id -u):$(id -g)" tensorflow/tensorflow:nightly-devel bash
```

With the source tree set up, build the TensorFlow package within the container's virtual environment:

1. Configure the build: this prompts the user to answer build configuration questions.
2. Build the tool used to create the pip package.
3. Run the tool to create the pip package.
4. Adjust the ownership permissions of the file for outside the container.

```shell
./configure  # answer prompts or use defaults
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /mnt  # create package
chown $HOST_PERMS /mnt/tensorflow-version-tags.whl
```

Install and verify the package within the container:

```shell
pip uninstall tensorflow  # remove current version
pip install /mnt/tensorflow-version-tags.whl
cd /tmp  # don't import from source directory
python -c "import tensorflow as tf; print(tf.__version__)"
```

Success: TensorFlow is now installed.
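The `cd /tmp` step above matters because Python searches the current directory first when resolving imports, so importing from inside the source tree would pick up the local `tensorflow/` directory rather than the wheel you just installed. A small self-contained demonstration of that shadowing behavior, using a throwaway package name (`mypkg`) rather than TensorFlow itself:

```python
import os
import subprocess
import sys
import tempfile

# Create a directory containing a stub package, then run Python from that
# directory: the local package is what gets imported, regardless of what is
# installed in site-packages.
with tempfile.TemporaryDirectory() as d:
    pkg = os.path.join(d, "mypkg")
    os.makedirs(pkg)
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write("SOURCE = 'local directory'\n")
    result = subprocess.run(
        [sys.executable, "-c", "import mypkg; print(mypkg.SOURCE)"],
        cwd=d, capture_output=True, text=True,
    )

print(result.stdout.strip())  # the local stub wins over the import path
```

This is exactly why verifying the build from the source root can silently exercise the wrong code.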
On your host machine, the TensorFlow pip package is in the current directory (with host user permissions): `./tensorflow-version-tags.whl`

### GPU support

Docker is the easiest way to build GPU support for TensorFlow, since the host machine only requires the NVIDIA® driver (the NVIDIA® CUDA® Toolkit doesn't have to be installed). See the GPU support guide and the TensorFlow Docker guide to set up nvidia-docker (Linux only).

The following example downloads the TensorFlow `:nightly-devel-gpu-py3` image and uses nvidia-docker to run the GPU-enabled container. This development image is configured to build a Python 3 pip package with GPU support:

```shell
docker pull tensorflow/tensorflow:nightly-devel-gpu-py3
docker run --runtime=nvidia -it -w /tensorflow -v $PWD:/mnt -e HOST_PERMS="$(id -u):$(id -g)" \
    tensorflow/tensorflow:nightly-devel-gpu-py3 bash
```

Then, within the container's virtual environment, build the TensorFlow package with GPU support:

```shell
./configure  # answer prompts or use defaults
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /mnt  # create package
chown $HOST_PERMS /mnt/tensorflow-version-tags.whl
```

Install and verify the package within the container and check for a GPU:

```shell
pip uninstall tensorflow  # remove current version
pip install /mnt/tensorflow-version-tags.whl
cd /tmp  # don't import from source directory
python -c "import tensorflow as tf; print(tf.contrib.eager.num_gpus())"
```

Success: TensorFlow is now installed.
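The `tensorflow-version-tags.whl` placeholder used throughout follows the standard wheel naming scheme, `{distribution}-{version}-{python tag}-{abi tag}-{platform tag}.whl`. A small sketch for reading those fields off a generated filename; `parse_wheel_name` is our own helper (not part of pip or the TensorFlow build tooling), and it assumes the common five-part form without an optional build tag:

```python
def parse_wheel_name(filename):
    """Split a wheel filename into its naming-convention fields.

    Assumes the five-part form without a build tag, e.g.
    tensorflow-1.12.0-cp36-cp36m-linux_x86_64.whl
    """
    stem = filename[: -len(".whl")]
    parts = stem.split("-")
    return {
        "distribution": "-".join(parts[:-4]),
        "version": parts[-4],
        "python": parts[-3],   # Python implementation/version tag
        "abi": parts[-2],      # ABI tag
        "platform": parts[-1], # OS/architecture tag
    }

info = parse_wheel_name("tensorflow-1.12.0-cp36-cp36m-linux_x86_64.whl")
print(info["version"], info["python"], info["platform"])
```

Reading the tags this way makes it easy to confirm that the package you built matches the Python version and platform you configured for.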
## Tested build configurations

### Linux

CPU:

| Version | Python version | Compiler | Build tools |
|---|---|---|---|
| tensorflow-1.12.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.15.0 |
| tensorflow-1.11.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.15.0 |
| tensorflow-1.10.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.15.0 |
| tensorflow-1.9.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.11.0 |
| tensorflow-1.8.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.10.0 |
| tensorflow-1.7.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.10.0 |
| tensorflow-1.6.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.9.0 |
| tensorflow-1.5.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.8.0 |
| tensorflow-1.4.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.5.4 |
| tensorflow-1.3.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.4.5 |
| tensorflow-1.2.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.4.5 |
| tensorflow-1.1.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.4.2 |
| tensorflow-1.0.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.4.2 |

GPU:

| Version | Python version | Compiler | Build tools | cuDNN | CUDA |
|---|---|---|---|---|---|
| tensorflow_gpu-1.12.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.15.0 | 7 | 9 |
| tensorflow_gpu-1.11.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.15.0 | 7 | 9 |
| tensorflow_gpu-1.10.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.15.0 | 7 | 9 |
| tensorflow_gpu-1.9.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.11.0 | 7 | 9 |
| tensorflow_gpu-1.8.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.10.0 | 7 | 9 |
| tensorflow_gpu-1.7.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.9.0 | 7 | 9 |
| tensorflow_gpu-1.6.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.9.0 | 7 | 9 |
| tensorflow_gpu-1.5.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.8.0 | 7 | 9 |
| tensorflow_gpu-1.4.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.5.4 | 6 | 8 |
| tensorflow_gpu-1.3.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.4.5 | 6 | 8 |
| tensorflow_gpu-1.2.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.4.5 | 5.1 | 8 |
| tensorflow_gpu-1.1.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.4.2 | 5.1 | 8 |
| tensorflow_gpu-1.0.0 | 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.4.2 | 5.1 | 8 |

### macOS

CPU:

| Version | Python version | Compiler | Build tools |
|---|---|---|---|
| tensorflow-1.12.0 | 2.7, 3.3-3.6 | Clang from Xcode | Bazel 0.15.0 |
| tensorflow-1.11.0 | 2.7, 3.3-3.6 | Clang from Xcode | Bazel 0.15.0 |
| tensorflow-1.10.0 | 2.7, 3.3-3.6 | Clang from Xcode | Bazel 0.15.0 |
| tensorflow-1.9.0 | 2.7, 3.3-3.6 | Clang from Xcode | Bazel 0.11.0 |
| tensorflow-1.8.0 | 2.7, 3.3-3.6 | Clang from Xcode | Bazel 0.10.1 |
| tensorflow-1.7.0 | 2.7, 3.3-3.6 | Clang from Xcode | Bazel 0.10.1 |
| tensorflow-1.6.0 | 2.7, 3.3-3.6 | Clang from Xcode | Bazel 0.8.1 |
| tensorflow-1.5.0 | 2.7, 3.3-3.6 | Clang from Xcode | Bazel 0.8.1 |
| tensorflow-1.4.0 | 2.7, 3.3-3.6 | Clang from Xcode | Bazel 0.5.4 |
| tensorflow-1.3.0 | 2.7, 3.3-3.6 | Clang from Xcode | Bazel 0.4.5 |
| tensorflow-1.2.0 | 2.7, 3.3-3.6 | Clang from Xcode | Bazel 0.4.5 |
| tensorflow-1.1.0 | 2.7, 3.3-3.6 | Clang from Xcode | Bazel 0.4.2 |
| tensorflow-1.0.0 | 2.7, 3.3-3.6 | Clang from Xcode | Bazel 0.4.2 |

GPU:

| Version | Python version | Compiler | Build tools | cuDNN | CUDA |
|---|---|---|---|---|---|
| tensorflow_gpu-1.1.0 | 2.7, 3.3-3.6 | Clang from Xcode | Bazel 0.4.2 | 5.1 | 8 |
| tensorflow_gpu-1.0.0 | 2.7, 3.3-3.6 | Clang from Xcode | Bazel 0.4.2 | 5.1 | 8 |
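As noted under the configuration options, binaries built from TensorFlow 1.6 on use AVX instructions, which older CPUs do not support. On Linux you can check for AVX by inspecting the `flags` line of `/proc/cpuinfo`; the sketch below keeps the parsing in a pure helper so it can also be run against a saved copy of that file. This is a convenience of our own, not part of TensorFlow:

```python
def has_avx(cpuinfo_text):
    """Return True if any 'flags' line in /proc/cpuinfo-style text lists avx."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "avx" in flags:
                return True
    return False

# On a Linux host, check the real CPU; elsewhere this file won't exist.
try:
    with open("/proc/cpuinfo") as f:
        print("AVX supported:", has_avx(f.read()))
except OSError:
    print("/proc/cpuinfo not available (non-Linux system)")
```

If the check comes back False, build from a pre-1.6 release branch or configure a less aggressive optimization flag than the default `-march=native` on the machine doing the building.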

Original source: https://www.cnblogs.com/2008nmj/p/10355527.html

