
Setting Up a Development Environment on Ubuntu 16.04

Posted on 2018-11-25 | Category: Linux

1. Install the CUDA driver

https://blog.csdn.net/qq_20492405/article/details/79034430

In Settings, open Software & Updates, switch to the Additional Drivers tab, and select the NVIDIA driver.

[Screenshot: Software & Updates -> Additional Drivers]
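After selecting the proprietary driver and rebooting, it is worth checking that the GPU is actually visible. A minimal check (assuming the NVIDIA driver was installed from Additional Drivers as above):

# the kernel module and the userspace tools should both be present after the reboot
lsmod | grep nvidia
nvidia-smi    # lists the GPU, driver version, and any running CUDA processes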

2. Install the CUDA toolkit, cuDNN, etc.

2.1 cuda-toolkit

Download from https://developer.nvidia.com/cuda-80-ga2-download-archive and install via the deb package. Afterwards, add the following to ~/.bashrc:

export CUDA_HOME=/usr/local/cuda
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH:$CUDA_HOME/extras/CUPTI/lib64
export PATH=$CUDA_HOME/bin:$PATH

Build the CUDA samples and run deviceQuery:

cd /usr/local/cuda/samples
sudo make -j16
cd bin/x86_64/linux/release
./deviceQuery

The result:

[Screenshot: deviceQuery output]

2.2 cudnn

https://developer.nvidia.com/cuda-80-ga2-download-archive
Install the cuDNN 7.1.4 deb for CUDA 8.0; by default it installs under /usr/local.
Check the result:

# check the cuDNN version
cat /usr/include/cudnn.h | grep CUDNN_MAJOR -A 2
# check the CUDA toolkit version
nvcc --version

3. Install essential software

# terminal emulator; unlike gnome-terminal it supports horizontal and vertical splits, very handy, a Linux "iTerm"
sudo apt install -y terminator
# Ubuntu does not open the SSH port by default, so install the SSH server yourself
sudo apt install -y openssh-server

How to configure the Sogou input method after installing it:

  • In Settings => Text Entry, keep only fcitx

[Screenshot: Text Entry settings]

  • Find the fcitx configuration via the global search box and keep only Sogou input and English

[Screenshot: fcitx input method configuration]

4. Configure VNC remote access

See step 1 of https://www.cnblogs.com/xuliangxing/p/7642650.html; a rough sketch follows.
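A minimal sketch of what that step amounts to, assuming the vnc4server package from the Ubuntu 16.04 repositories (the linked article walks through the full configuration):

# install a VNC server and start a session on display :1
sudo apt install -y vnc4server
vncserver :1 -geometry 1920x1080 -depth 24   # the first run prompts for a VNC password
# the session then listens on TCP port 5901; stop it with:
vncserver -kill :1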

5. Shadowsocks-Qt5

sudo add-apt-repository ppa:hzwhuang/ss-qt5
sudo apt-get update
sudo apt-get install shadowsocks-qt5

Afterwards, install and configure the Proxy SwitchyOmega extension in Chrome.
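SwitchyOmega should point at the local SOCKS5 listener that Shadowsocks-Qt5 opens (commonly 127.0.0.1:1080, but use whatever local port you configured in the client; the port here is an assumption). A quick way to confirm the proxy works before touching the browser:

# test the local SOCKS5 proxy directly (assumes it listens on 127.0.0.1:1080)
curl -I --socks5-hostname 127.0.0.1:1080 https://www.google.com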

Setting Up an OpenCL Environment on the Arndale Octa

Posted on 2018-09-15 | Updated on 2018-11-25 | Category: Arndale Octa

I recently wanted to use OpenCL to accelerate the SIFT algorithm, and I happen to have an Arndale Octa board on hand, so it might as well be put to good use.

The Linaro 14.04 kernel for the Arndale Octa does not include the Mali driver by default, so the kernel has to be modified and rebuilt. The detailed steps follow.

1. Build a kernel (r4p0) with Mali driver support

git clone https://git.linaro.org/gwg/linaro-lsk.git
cd linaro-lsk
git checkout lsk-v3.14-lt-mali-r4p0-beta2

Here is the automation script, build.sh:

# build on Arndale OCTA board
# Working linaro image for sd card can be found at: http://releases.linaro.org/14.04/ubuntu/arndale-octa

set -x
set -e
# Initial config
# generate the kernel config
./scripts/kconfig/merge_config.sh linaro/configs/linaro-base.conf linaro/configs/distribution.conf linaro/configs/arndale_octa.conf linaro/configs/lt-arndale_octa.conf linaro/configs/mali-arndale-octa.conf

# build the kernel
MINOR_VERSION=4
make zreladdr-y=0x20008000 LOCALVERSION= KERNELVERSION=3.14.0-${MINOR_VERSION}-linaro-arndale-octa -j4 zImage modules dtbs
sudo make LOCALVERSION= KERNELVERSION=3.14.0-${MINOR_VERSION}-linaro-arndale-octa modules_install

# Mount boot partition, prepare for installkernel
sudo mount /dev/mmcblk1p2 /media/boot
sudo rm -r /boot/*

# Install kernel
kernelversion=`cat ./include/config/kernel.release`
sudo installkernel $kernelversion ./arch/arm/boot/zImage ./System.map /boot

# Install device tree binary
sudo cp arch/arm/boot/dts/exynos5420-arndale-octa.dtb /media/boot/board.dtb

# Reboot
sudo sync
sudo umount /media/boot
#sudo reboot
echo "finished"

If all goes well, the script prints "finished". Then reboot:

reboot

After the reboot, check that the Mali driver has taken effect (the Mali module should show up in the kernel log).

2. Download the user-space driver

The Arndale Octa uses the Exynos 5420, whose GPU is a 6-core Mali-T628. The official download page is:

https://developer.arm.com/products/software/mali-drivers/user-space

Find malit62xr4p002rel0linux1fbdevtar.gz, extract it, and copy the libraries to /usr/lib. Direct download link:

https://developer.arm.com/-/media/Files/downloads/mali-drivers/user-space/archive/arndale-octa/malit62xr4p002rel0linux1fbdevtar.gz?revision=c1026f2b-1b1f-430a-be17-6e1949c79463
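A minimal sketch of downloading and installing the user-space libraries on the board; the exact directory layout inside the archive may differ between driver releases, so the copy step simply searches for libmali:

# download and unpack the fbdev user-space Mali driver (run on the board)
wget -O mali-t62x-r4p0-fbdev.tar.gz "https://developer.arm.com/-/media/Files/downloads/mali-drivers/user-space/archive/arndale-octa/malit62xr4p002rel0linux1fbdevtar.gz?revision=c1026f2b-1b1f-430a-be17-6e1949c79463"
mkdir -p mali && tar -xzf mali-t62x-r4p0-fbdev.tar.gz -C mali
# copy whatever libmali libraries the archive contains into /usr/lib
sudo find mali -name 'libmali*' -exec cp -d {} /usr/lib/ \;
sudo ldconfig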

3. Create the mali.icd client configuration

Create the file /etc/OpenCL/vendors/mali.icd with the following content:

/usr/lib/libmali.so
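A minimal sketch of creating that file and verifying that the ICD loader can now see the Mali platform (clinfo is just one convenient check, if it is available in the distro's repositories):

# create the ICD entry pointing at the Mali user-space driver
sudo mkdir -p /etc/OpenCL/vendors
echo /usr/lib/libmali.so | sudo tee /etc/OpenCL/vendors/mali.icd

# optional check: the Mali-T628 should be listed as an OpenCL platform/device
sudo apt-get install -y clinfo
clinfo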

4. Download the Mali OpenCL SDK and run a test

Download the Mali OpenCL SDK 1.1. It is no longer provided by the official site, but a backup exists on GitHub:

git clone https://github.com/jefby/Mali_OpenCL_SDK
cd Mali_OpenCL_SDK
git checkout 1.1.0
sudo -s
cd samples/hello_world_opencl
make
./hello_world_opencl

If everything is set up correctly, hello_world_opencl runs and prints its results.

Building the TensorFlow Lite C++ Shared Library (Android)

Posted on 2018-08-30 | Updated on 2018-11-25 | Category: ML

These are my notes on using the TFLite C++ API for inference from Android JNI code.

  • Go to the TensorFlow source root and add the following to WORKSPACE:
android_sdk_repository(
    name = "androidsdk",
    api_level = 27,
    build_tools_version = "27.0.2",
    path = "/Users/jefby/Library/Android/sdk",
)

# Android NDK r12b is recommended (higher may cause issues with Bazel)
android_ndk_repository(
    name = "androidndk",
    # path = "/Users/xxx/Library/Android/sdk/android-ndk-r16b",
    path = "/Users/xxx/Library/Android/sdk/ndk-bundle",
    api_level = 21,
)
  • Add the following to tensorflow/contrib/lite/BUILD to build libtensorflowLite.so:
cc_binary(
    name = "libtensorflowLite.so",
    linkopts = ["-shared", "-Wl,-soname=libtensorflowLite.so"],
    visibility = ["//visibility:public"],
    linkshared = 1,
    copts = tflite_copts(),
    deps = [
        ":framework",
        "//tensorflow/contrib/lite/kernels:builtin_ops",
    ],
)
  • Build; set the target to armeabi-v7a or arm64-v8a according to your APP_ABI:
# build for armeabi-v7a
bazel build -c opt //tensorflow/contrib/lite:libtensorflowLite.so --crosstool_top=//external:android/crosstool --cpu=armeabi-v7a --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cxxopt="-std=c++11" --verbose_failures


# build for arm64-v8a
bazel build -c opt //tensorflow/contrib/lite:libtensorflowLite.so --crosstool_top=//external:android/crosstool --cpu=arm64-v8a --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cxxopt="-std=c++11" --verbose_failures

I then hit the following error:

jni_src/jni/src/utils/xxxxTFLite.cpp:41: error: undefined reference to 'tflite::InterpreterBuilder::operator()(std::__ndk1::unique_ptr<tflite::Interpreter, std::__ndk1::default_delete<tflite::Interpreter>>*)'

The cause is a problem with NDK r16b; building with the r17 NDK bundled with Android Studio fixes it.

Rebuilding produces the file bazel-bin/tensorflow/contrib/lite/libtensorflowLite.so.

This is the shared library we need; it can be integrated into an Android project through Android.mk and Application.mk, as sketched below.
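As a rough sketch of that integration (the module names, paths, and source file below are illustrative, not taken from a real project): Android.mk wraps the prebuilt .so as a PREBUILT_SHARED_LIBRARY and links it into the JNI module.

# Android.mk (sketch)
LOCAL_PATH := $(call my-dir)

# prebuilt TFLite library produced by the bazel build above
include $(CLEAR_VARS)
LOCAL_MODULE := tensorflowLite
LOCAL_SRC_FILES := libs/$(TARGET_ARCH_ABI)/libtensorflowLite.so
include $(PREBUILT_SHARED_LIBRARY)

# the JNI module that calls the TFLite C++ API
include $(CLEAR_VARS)
LOCAL_MODULE := native-inference
LOCAL_SRC_FILES := src/native_inference.cpp
LOCAL_C_INCLUDES := $(LOCAL_PATH)/include        # TFLite and flatbuffers headers copied from the tensorflow tree
LOCAL_CPPFLAGS := -std=c++11 -fexceptions -frtti
LOCAL_SHARED_LIBRARIES := tensorflowLite
LOCAL_LDLIBS := -llog
include $(BUILD_SHARED_LIBRARY)

In Application.mk, APP_ABI should match the --cpu used in the bazel command (armeabi-v7a or arm64-v8a), and APP_STL := c++_shared is a reasonable choice for the r17 NDK toolchain.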

Detailed Steps for Converting a mobilenet_ssd TensorFlow .pb to tflite

Posted on 2018-08-20 | Updated on 2018-11-25 | Category: ML

1. Required tools and environment

  • Download the tensorflow/models source:

    git clone https://github.com/tensorflow/models

  • Configure the environment as the project documentation describes; note that you must add the following to ~/.bashrc:

    # From tensorflow/models/research/
    export PYTHONPATH=$PYTHONPATH:xxxxxx/tensorflow-models/research:xxxx/tensorflow-models/research/slim
  • Download the TensorFlow source and Android NDK r16b:

    git clone https://github.com/tensorflow/tensorflow
    cd tensorflow
    git checkout r1.10

    Set up the NDK needed to build the Android demo: go to the TensorFlow source root and add the following lines to WORKSPACE:

    android_sdk_repository(
        name = "androidsdk",
        api_level = 27,
        build_tools_version = "27.0.2",
        path = "/Users/xxxx/Library/Android/sdk",
    )

    # Android NDK r12b is recommended (higher may cause issues with Bazel)
    android_ndk_repository(
        name = "androidndk",
        path = "/Users/xxxx/Library/Android/sdk/android-ndk-r16b",
        api_level = 21,
    )

2. Generate a tflite-compatible pb graph

2.1) Set variables

ROOT_PATH=xxxxx/tensorflow/pretrained_models
export CONFIG_FILE=${ROOT_PATH}/pipeline.config
export CHECKPOINT_PATH=${ROOT_PATH}/model.ckpt
export OUTPUT_DIR=/tmp/tflite

2.2) Generate the frozen graph from the pb, checkpoint, and pipeline.config

python object_detection/export_tflite_ssd_graph.py --pipeline_config_path $CONFIG_FILE  --trained_checkpoint_prefix $CHECKPOINT_PATH --output_directory /tmp/tflite/ --add_postprocessing_op=true

3. Obtain the optimized model with TOCO

TOCO: TensorFlow Lite Optimizing Converter

3.1) For a quantized (integer) model [I have not gotten this working yet]

bazel run --config=opt tensorflow/contrib/lite/toco:toco -- \
--input_file=$OUTPUT_DIR/tflite_graph.pb \
--output_file=$OUTPUT_DIR/detect.tflite \
--input_shapes=1,300,300,3 \
--input_arrays=normalized_input_image_tensor \
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
--inference_type=QUANTIZED_UINT8 \
--mean_values=128 \
--std_values=128 \
--change_concat_input_ranges=false \
--allow_custom_ops

3.2) For a float model

bazel run --config=opt tensorflow/contrib/lite/toco:toco -- \
--input_file=$OUTPUT_DIR/tflite_graph.pb \
--output_file=$OUTPUT_DIR/detect.tflite \
--input_shapes=1,300,300,3 \
--input_arrays=normalized_input_image_tensor \
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
--inference_type=FLOAT \
--allow_custom_ops
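Before wiring the model into the app, it is worth checking that TOCO actually produced the file and kept the custom post-processing op (the strings grep is only a rough heuristic):

ls -lh $OUTPUT_DIR/detect.tflite
# the custom detection post-processing op name should appear inside the flatbuffer
strings $OUTPUT_DIR/detect.tflite | grep TFLite_Detection_PostProcess | head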

4. Integrate into the Android Studio project

4.1) Update the model and configuration files

cp /tmp/tflite/detect.tflite tensorflow/contrib/lite/examples/android/app/src/main/assets

Edit tensorflow/contrib/lite/examples/android/BUILD to add the new detect.tflite and color_pen_label.txt:

@@ -37,9 +37,10 @@ android_binary(
"@tflite_conv_actions_frozen//:conv_actions_frozen.tflite",
"//tensorflow/contrib/lite/examples/android/app/src/main/assets:conv_actions_labels.txt",
"@tflite_mobilenet_ssd//:mobilenet_ssd.tflite",
- "@tflite_mobilenet_ssd_quant//:detect.tflite",
+ "//tensorflow/contrib/lite/examples/android/app/src/main/assets:detect.tflite",
"//tensorflow/contrib/lite/examples/android/app/src/main/assets:box_priors.txt",
"//tensorflow/contrib/lite/examples/android/app/src/main/assets:coco_labels_list.txt",
+ "//tensorflow/contrib/lite/examples/android/app/src/main/assets:color_pen_label.txt",
],

Create color_pen_label.txt with the following content:

???
color-pen

Copy it into the demo assets directory:

cp color_pen_label.txt tensorflow/contrib/lite/examples/android/app/src/main/assets

For a float model, modify tensorflow/contrib/lite/examples/android/app/src/main/java/org/tensorflow/demo/DetectorActivity.java as follows:

@@ -50,9 +50,9 @@ public class DetectorActivity extends CameraActivity implements OnImageAvailable

// Configuration values for the prepackaged SSD model.
private static final int TF_OD_API_INPUT_SIZE = 300;
- private static final boolean TF_OD_API_IS_QUANTIZED = true;
+ private static final boolean TF_OD_API_IS_QUANTIZED = false;
private static final String TF_OD_API_MODEL_FILE = "detect.tflite";
- private static final String TF_OD_API_LABELS_FILE = "file:///android_asset/coco_labels_list.txt";
+ private static final String TF_OD_API_LABELS_FILE = "file:///android_asset/color_pen_label.txt";

For a quantized model, modify the source as follows:

@@ -50,9 +50,9 @@ public class DetectorActivity extends CameraActivity implements OnImageAvailable

// Configuration values for the prepackaged SSD model.
private static final int TF_OD_API_INPUT_SIZE = 300;
private static final String TF_OD_API_MODEL_FILE = "detect.tflite";
- private static final String TF_OD_API_LABELS_FILE = "file:///android_asset/coco_labels_list.txt";
+ private static final String TF_OD_API_LABELS_FILE = "file:///android_asset/color_pen_label.txt";

4.2) Build the tflite_demo app

bazel build --cxxopt=--std=c++11 //tensorflow/contrib/lite/examples/android:tflite_demo

# arm64 build
bazel build -c opt --config=android_arm64 --cxxopt='--std=c++11' //tensorflow/contrib/lite/examples/android:tflite_demo

4.3) Install on the Android device

adb install -r bazel-bin/tensorflow/contrib/lite/examples/android/tflite_demo.apk

4.4) Run the TFL Detect app

A Survey of AArch64 Hardware Platforms

Posted on 2018-08-19 | Updated on 2018-11-25 | Category: aarch64

ARM released the ARMv8 architecture mainly targeting the high-performance enterprise market. Because ARM chips are highly customizable and power-efficient, many players saw an opportunity in the server market and are actively pushing ARM into traditional server territory. It is still far weaker than Intel, but I believe it will carve out its own space. ARM licenses SoC IP cores, so companies can tailor chips to the needs of a specific application and add dedicated hardware logic, for example SSL acceleration, which Intel cannot match; the ecosystem, however, is not yet fully established and needs close cooperation among vendors. Below is a partial list of AArch64 hardware platforms, for my own reference and for anyone who needs it. If you know of new devices, feel free to let me know!

1. versatile-express(FPGA soft-core)

A demonstration platform from ARM, used for validation.

2. Applied Micro Mustang:

An 8-core ARMv8 system and the earliest arm64 hardware platform. It can now run Linaro Ubuntu, Fedora, openSUSE, and other systems; software support is fairly good, and the bootloader is UEFI rather than the u-boot common on embedded boards.
https://www.apm.com/products/data-center/x-gene-family/x-c1-development-kits/

3. ARM Juno

http://www.arm.com/zh/products/tools/development-boards/versatile-express/juno-arm-development-platform.php

4. Qemu

A software-emulated hardware platform: http://www.bennee.com/~alex/blog/2014/05/09/running-linux-in-qemus-aarch64-system-emulation-mode/

5. AMD 64-bit ARM Opteron Developer Kit

AMD's ARM64 server platform.

6. Cavium Project ThunderX (48 cores) && EZchip

A real beast: https://www.youtube.com/watch?v=zmnjZUQPq5U
EZchip offers 100 ARM 64-bit cores; details at http://www.ezchip.com/products/?ezchip=585&spage=686

7. 96Boards

(1) HiKey: 8-core Cortex-A53, HiSilicon Hi6220
https://www.96boards.org/products/hikey/
(2) DragonBoard 410c: 4-core Cortex-A53
https://www.96boards.org/products/dragonboard/
(3) AMD is also said to be planning an enterprise-oriented reference design board, at a comparatively low price.

8. Huawei platforms

The D-01 and D-02; good products with leading technology.
https://www.youtube.com/watch?v=dLHhnLLw4Fw

9. NVIDIA platforms

Aimed at machine perception, artificial intelligence, and graphics/compute workloads for drones:
http://devblogs.nvidia.com/parallelforall/nvidia-jetson-tx1-supercomputer-on-module-drives-next-wave-of-autonomous-machines/

Purchase link:
http://www.nvidia.com/object/jetson-tx1-dev-kit.html
The latest version is the TX2, which has CUDA acceleration, but the CPU feels a bit dated: why is it still Cortex-A57?
https://developer.nvidia.com/embedded/buy/jetson-tx2

Creating and Using an AArch64 Virtual Machine on a PC

Posted on 2018-08-19 | Updated on 2018-11-25 | Category: aarch64

If you don't want to build the virtual machine from scratch, you can use this prebuilt image:
Link: https://pan.baidu.com/s/1aQikk7nZWlvXn2EZIn_M3w  Password: 2123
Usage: see step 4 below and boot it directly; the username and password are both jefby.


Most PCs are x86_64, but sometimes we need to develop programs that run on arm64 devices, and that is where an arm64 virtual machine comes in very handy. The detailed steps follow.

1. qemu-system-arm

Install it directly with apt: sudo apt install -y qemu-system-arm
Or build it from source:

wget https://download.qemu.org/qemu-2.12.1.tar.bz2
tar -xjvf qemu-2.12.1.tar.bz2
cd qemu-2.12.1/
./configure --target-list=aarch64-softmmu
make -j16
sudo make install

When installing from source, note that you must create the file /etc/qemu-ifup with the following content:

#!/bin/sh 
/sbin/ifconfig $1 192.168.0.1

Then make it executable:
chmod +x /etc/qemu-ifup

2. Download the Ubuntu arm64 ISO and QEMU_EFI.fd

http://cdimage.ubuntu.com/releases/16.04/release/

Create a 40 GB image in qcow2 format. Compared with raw, qcow2 has the advantage that the file only occupies the space actually used rather than the full 40 GB.

qemu-img create -f qcow2  ubuntu16.04-arm64.qcow2 40G
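qemu-img info shows that behaviour directly: the virtual size is 40G, while the disk size starts out tiny and grows only as the guest writes data.

qemu-img info ubuntu16.04-arm64.qcow2
# reports "virtual size: 40G" and a disk size of only a few hundred KB for a freshly created image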


Download QEMU_EFI.fd:

wget http://releases.linaro.org/components/kernel/uefi-linaro/16.02/release/qemu64/QEMU_EFI.fd

3. Create the ubuntu16.04-arm64.qcow2 VM and install Ubuntu 16.04.5

qemu-system-aarch64 -m 2048 -cpu cortex-a57 -smp 2 -M virt -bios QEMU_EFI.fd -nographic -drive if=none,file=ubuntu-16.04.5-server-arm64.iso,id=cdrom,media=cdrom -device virtio-scsi-device -device scsi-cd,drive=cdrom -drive if=none,file=ubuntu16.04-arm64.qcow2,id=hd0 -device virtio-blk-device,drive=hd0

During installation, be sure to select OpenSSH Server. If everything goes smoothly, a login prompt appears automatically once installation finishes.

4. Command to restart the VM after shutdown

qemu-system-aarch64 -m 2048 -cpu cortex-a57 -smp 2 -M virt -bios QEMU_EFI.fd -nographic -device virtio-scsi-device -drive if=none,file=ubuntu16.04-arm64.qcow2,id=hd0 -device virtio-blk-device,drive=hd0  -netdev type=tap,id=net0 -device virtio-net-device,netdev=net0

5. Fix for UEFI not booting into Ubuntu automatically

Enter the UEFI interface, type exit in the UEFI shell, then in the Boot Maintenance Manager go to Boot Options, choose Add Boot Option, navigate to boot/efi/ubuntu/grubaa64.efi, and set the boot order so the new entry comes first.

6. Configure the network

Host:

sudo ifconfig tun0 192.168.0.1

Virtual machine:

sudo ifconfig eth0 up
sudo ifconfig eth0 192.168.0.2

7. Log in

On the host, run ssh user-name@192.168.0.2 and enter the password to log in.

8. References

https://blog.csdn.net/chenxiangneu/article/details/78955462

Setting Up QEMU Networking on an AArch64 Host

Posted on 2016-12-07 | Updated on 2018-11-25 | Category: aarch64

The previous post, "Booting a Virtual Machine with QEMU on an AArch64 Host", showed how to start a VM using user networking mode. That mode has several drawbacks: the guest cannot communicate with other VMs, and ping cannot be used to test connectivity. This post describes how to start a VM using tap mode instead.

This is a private virtual LAN model (tap networking): the guest is reachable only from the host machine, and it does not affect the host's network.

1. Install the libvirt daemon

yum install -y libvirt-daemon

2. Start libvirtd

service libvirtd start
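libvirtd is used here for its default NAT network, which provides the virbr0 bridge (and DHCP) that the tap interface is attached to. A quick check that it is up, as a sketch:

# the "default" libvirt network should be active and own the virbr0 bridge
virsh net-list --all
ip addr show virbr0
# start it if it is not active
virsh net-start default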

3. Start the image with the new script

#!/bin/sh

if [ $# -ne 1 ];then
echo "Usage: $0 network-mode[user|tap]"
echo "e.g.: $0 user"
echo " $0 tap"
exit
fi


if [ $1 == "user" ];then
# User Networking Mode
qemu-system-aarch64 -smp 4 -m 8092 -M virt -bios QEMU_EFI.fd -nographic \
-device virtio-blk-device,drive=image \
-drive if=none,id=image,file=xenial-server-cloudimg-arm64-uefi1.qcow2,format=qcow2 \
-device virtio-blk-device,drive=cloud \
-drive if=none,id=cloud,file=cloud.img,format=raw \
-netdev user,id=user0,hostfwd=tcp::2222-:22 -device virtio-net-device,netdev=user0 \
-enable-kvm -cpu host
elif [ $1 == "tap" ];then
# Tap Networking Mode [Private Virtual Network]
macaddress=52:54:00:4a:1e:d4
qemu-system-aarch64 -smp 4 -m 8092 -M virt -bios QEMU_EFI.fd -nographic \
-device virtio-blk-device,drive=image \
-drive if=none,id=image,file=xenial-server-cloudimg-arm64-uefi1.qcow2,format=qcow2 \
-device virtio-blk-device,drive=cloud \
-drive if=none,id=cloud,file=cloud.img,format=raw \
-device virtio-net-device,netdev=network0,mac=$macaddress \
-netdev tap,id=network0,ifname=tap0,script=qemu-ifup.sh,downscript=no \
-enable-kvm -cpu host
fi

The qemu-ifup.sh it depends on:

#!/bin/sh

set -x

switch=virbr0

if [ -n "$1" ];then
# tunctl -u `whoami` -t $1 (use ip tuntap instead!)
ip tuntap add $1 mode tap user `whoami`
ip link set $1 up
sleep 0.5s
# brctl addif $switch $1 (use ip link instead!)
ip link set $1 master $switch
exit 0
else
echo "Error: no interface specified"
exit 1
fi

Usage:

./start_vm.sh tap

4. Find the guest's IP

sudo virsh net-dhcp-leases default

The output lists the DHCP lease handed out to the guest.

Then log in over ssh, for example:
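If the lease above shows 192.168.122.100 for the guest (the address and user name will differ depending on your lease and cloud-init configuration):

ssh <your_username>@192.168.122.100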

Booting a Virtual Machine with QEMU on an AArch64 Host

Posted on 2016-11-25 | Updated on 2018-11-25 | Category: aarch64

1. host

CentOS Linux release 7.2.1603 (AltArch) aarch64
kernel: 3.19.0-0.79.aa7a.aarch64

Check whether the kernel supports KVM:

$ dmesg | grep -i kvm
[ 0.364920] kvm [1]: GICH base=0x780c0000, GICV base=0x780e0000, IRQ=122
[ 0.365026] kvm [1]: timer IRQ3
[ 0.365039] kvm [1]: Hyp mode initialized successfully

2. Install the required packages

sudo yum install -y qemu-system-aarch64

3. Download the UEFI cloud image and QEMU_EFI.fd

wget https://releases.linaro.org/components/kernel/uefi-linaro/15.12/release/qemu64/QEMU_EFI.fd
wget https://mirrors.tuna.tsinghua.edu.cn/ubuntu-cloud-images/xenial/20161124/xenial-server-cloudimg-arm64-uefi1.img

4. Create cloud.img

  • Create a cloud.txt file with the following content, replacing the ssh-rsa line with your local id_rsa.pub:
#cloud-config

users:
  - name: <your_username>
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1y....
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    groups: sudo
    shell: /bin/bash
  • Create cloud.img:
# create the cloud.img seed image from cloud.txt
cloud-localds cloud.img cloud.txt
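cloud-localds is not installed by default; it comes from the cloud-utils package (on this CentOS host, I assume it is pulled in from EPEL):

sudo yum install -y epel-release
sudo yum install -y cloud-utils   # provides cloud-localds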

5. Boot the image

qemu-system-aarch64 -smp 4 -m 8092 -M virt -bios QEMU_EFI.fd -nographic \
-device virtio-blk-device,drive=image \
-drive if=none,id=image,file=xenial-server-cloudimg-arm64-uefi1.img \
-device virtio-blk-device,drive=cloud \
-drive if=none,id=cloud,file=cloud.img \
-netdev user,id=user0,hostfwd=tcp::2222-:22 -device virtio-net-device,netdev=user0 \
-enable-kvm -cpu host

6. Log in

ssh -p 2222 <your_username>@localhost

On success it looks like this:

[root@APM html]# ssh -p 2222 root@localhost
Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-47-generic aarch64)

* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage

Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.


Last login: Fri Nov 25 12:22:42 2016 from 10.0.2.2
root@ubuntu:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.1 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.1 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

Debugging the Kernel with QEMU (host = aarch64)

Posted on 2016-03-26 | Updated on 2018-11-25 | Category: aarch64

A previous article, http://blog.csdn.net/jefbai/article/details/44901447, covered kernel debugging when the host is x86_64. What if the host is AArch64 instead? The approach is similar.

1. Build qemu-system-aarch64

git clone https://github.com/qemu/qemu
cd qemu
./configure --target-list=aarch64-softmmu && make -j8

Then add the build directory to the PATH variable and source ~/.bashrc.

2. Download and build the kernel

git clone https://git.fedorahosted.org/git/kernel-arm64.git
cd kernel-arm64
make defconfig
make -j8 Image

3. Install gdb and gcc-c++

4. Debug the kernel (locally; it can also be done over the network)

qemu-system-aarch64 -machine virt -cpu cortex-a57 -nographic -smp 1 -m 2048 -kernel arch/arm64/boot/Image --append "console=ttyAMA0" -s -S

-s enables the gdb debug stub, and -S halts the kernel at the first instruction so it does not run ahead. (This can be wrapped in a script; see the sketch at the end of this post.)

Then, in another terminal:

cd kernel-arm64
gdb ./vmlinux
target remote localhost:1234

5. Now the kernel can be debugged with gdb, for example:

b start_kernel
c

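As noted above, the QEMU invocation can be wrapped in a small script; a minimal sketch (run from the kernel-arm64 source tree):

#!/bin/sh
# run_qemu_debug.sh (sketch): boot the freshly built arm64 kernel with the gdb stub enabled
# -s  listen for gdb on tcp::1234
# -S  freeze the CPU at startup until gdb issues "continue"
exec qemu-system-aarch64 -machine virt -cpu cortex-a57 -nographic -smp 1 -m 2048 \
    -kernel arch/arm64/boot/Image \
    --append "console=ttyAMA0" \
    -s -S

In a second terminal, gdb ./vmlinux -ex "target remote localhost:1234" attaches in one step.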

Creating an AArch64 Docker Image

Posted on 2015-12-08 | Updated on 2018-11-25 | Category: aarch64

The official Docker images do not include a CentOS image for this architecture, so I decided to build one myself.
Reference:
https://wiki.centos.org/zh/Cloud/Docker

1. Clone the ami-creator repository

git clone https://github.com/katzj/ami-creator

2. Build and install ami-creator

Running python setup.py build fails with the following error:

#python setup.py build
Traceback (most recent call last):
File "setup.py", line 21, in <module>
from ez_setup import use_setuptools
ImportError: No module named ez_setup

Fix:

wget https://bootstrap.pypa.io/ez_setup.py
python ez_setup.py
cp ez_setup.py /usr/lib/python2.7/site-packages/
cd ami-creator/
python setup.py build
python setup.py install

At this point, running ami-creator --help still throws an exception:

ImportError: No module named imgcreate

The imgcreate module seems to be missing, so install it:

yum install -y python-imgcreate

However, python-imgcreate does not currently support the aarch64 architecture, so a patch is needed: modify /usr/lib/python2.7/site-packages/imgcreate/live.py as follows:

elif arch.startswith('arm'):
    LiveImageCreator = LiveImageCreatorBase
elif arch.startswith('aarch64'):
    LiveImageCreator = LiveImageCreatorBase
else:
    raise CreatorError("Architecture not supported!")

ami-creator is now ready to use.

3. Clone sig-cloud-instance-build and build the image

Run git clone https://github.com/CentOS/sig-cloud-instance-build to get the kickstart file used to build the Docker image. Compared with a normal kickstart file it is trimmed down considerably, keeping the system at around 70 MB, which is quite impressive.

ami-creator -c centos7-arm64.ks            # if all goes well this produces centos7-arm64-xxxx.img
img2tar.sh centos7-arm64-xxxx.img          # creates a tar.bz2 file under /tmp
docker import /tmp/centos7-arm64-xxxx.img.tar.bz2 jefby/centos-arm64

4. Build and publish the Docker image

yum install -y libguestfs-tools
docker import centos-7-arm64.img.tar.bz2 jefby/centos-arm64
docker run -i -t jefby/centos-arm64 /bin/bash      # note the container id after entering, or find it with docker ps -l
docker commit -m "init for centos arm64 images" -a "jefby" xxxx-id   # save the container's changes as an image
docker push jefby/centos-arm64                     # push the new image
# Some APIs changed between v1.8.0 and v1.8.1, so docker has to be rebuilt:
git remote add docker https://github.com/docker/docker/
git pull docker
git checkout v1.8.1
cd docker
./hack/make.sh dynbinary
# put the newly built docker first in PATH:
export PATH=new-docker-path:$PATH
docker --version                                   # check the version

Finally, run docker push jefby/centos-arm64 again to save the work.
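A quick sanity check of the imported image before pushing, just to confirm the rootfs works and the size matches expectations (a sketch):

# run a throwaway container from the image and inspect it
docker run --rm -it jefby/centos-arm64 /bin/bash -c "cat /etc/os-release && uname -m"
docker images jefby/centos-arm64    # the size should be roughly the ~70 MB mentioned above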
