Published 2023-11-29 (author not given)


LibTorch (Qt) Usage Notes

1 Using LibTorch in Qt

There are two ways to use LibTorch. I only use it with Qt, so I will only cover the Qt workflow.

1. Download the prebuilt binaries and use them directly (these conflicted with my existing open-source libraries in all sorts of ways; at any rate, they could not be used together with OpenCV and ITK).

2. Download the source code and build it yourself.

1 First, the prebuilt approach

If you just need LibTorch on its own, this is extremely easy: the official site provides prebuilt packages. Pick the one matching your environment, download it, and use it directly.

To use it from Qt, add this to your CMake file:

find_package(Torch REQUIRED)
include_directories(${TORCH_INCLUDE_DIRS})
target_link_libraries(${PROJECT_NAME} "${TORCH_LIBRARIES}")

Set Torch_DIR (that is the variable name find_package actually looks up) to /<download directory>/share/cmake/Torch.

Test:

torch::Tensor tensor = torch::rand({2, 3});

std::cout << tensor << std::endl;

Result: a 2×3 tensor of random values is printed (screenshot omitted).

2 The second approach: building from source

Install Anaconda

Download it and just click through the installer. There is only one real option, whether it should set up the environment variables for you; choosing yes is fine.

After installing, run python to check that it works.

If the interpreter starts as it did for me, everything is fine.

Download the source code

git clone --recursive https://github.com/pytorch/pytorch

With network speeds in mainland China this download is slow and takes a long time.

cd pytorch

git submodule sync

git submodule update --init --recursive

This again takes a very, very long time.

When it finishes, set:

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}

python setup.py build --cmake-only

ccmake build # or cmake-gui build

I need OpenCV: search for opencv; among the entries there is a USE_OPENCV option. Check it, then set the OpenCV path (reportedly OpenCV 4 and above are not supported; I used 3.16).

Then, since my graphics card is an AMD card, I unchecked USE_CUDA.

cuDNN likewise cannot be used, so uncheck USE_CUDNN.

As for tests, I build none of them: uncheck BUILD_TEST.

Then just configure and generate, and you are done.

Of course, you do not have to use cmake-gui for these settings.

You can do it directly like this (if you use CUDA, set the first two to 1):

USE_CUDA=0 USE_MKLDNN=0 BUILD_TEST=0 python setup.py install

This installs quickly, and then you are done.

Calling it from Qt

In CMake:

find_package(Torch REQUIRED)

3 Errors encountered during the build

Error 1: the pyyaml dependency check fails, which means pyyaml is missing (installing it, e.g. conda install pyyaml, should fix this):

Building wheel torch-1.3.0a0+a024e1e
-- Building version 1.3.0a0+a024e1e
Traceback (most recent call last):
  File "", line 759, in <module>
    build_deps()
  File "", line 313, in build_deps
    check_pydep('yaml', 'pyyaml')

Error 2: CMake cannot execute ninja ("No such file or directory"), so ninja is missing or broken in the Anaconda environment:

Building wheel torch-1.3.0a0+a024e1e
-- Building version 1.3.0a0+a024e1e
cmake --build . --target install --config Release -- -j 12
No such file or directory
CMake Error: Generator: execution of make failed. Make command was: "/home/yx/anaconda3/bin/ninja" "-j" "12" "install"
Traceback (most recent call last):
  File "", line 759, in <module>
    build_deps()
  File "", line 321, in build_deps

Related notes:
- Batch-reading PNG images with VTK
- 3D reconstruction with VTK
- Median filtering with OpenCV
- Batch-reading DICOM files with ITK
- Getting model spacing with VTK
- Getting patient information with ITK

GPU usage and program status (screenshot): you can see about 40 GB of GPU memory in use.

LibTorch is essentially PyTorch moved to the C++ side, which makes it easier to turn algorithms into commercial products. The basic workflow: train the model in Python and export a .pt model; the C++ program reads an image, passes it to LibTorch, runs prediction with the exported model, and exports the result as an image or other information for display.

Loading the .pt model

torch::DeviceType device_type = torch::kCUDA;
torch::jit::script::Module module0 =
    torch::jit::load("/home/xxxx/文档/QT_work/xxxx/bin/");
torch::Device device0(device_type,
                      static_cast<short>(this_lv_struct_.gpu));
module0.to(device0);
// Ideally this check would happen before hard-coding kCUDA above.
torch::cuda::is_available();

Standardizing the image with OpenCV

cv::Mat img;
img.create(512, 512, CV_32FC1);
for (int nr = 0; nr < 512; nr++) {
    float *outData = img.ptr<float>(nr);
    for (int mc = 0; mc < 512; mc++) {
        float *pixel = static_cast<float *>(
            this_lv_struct_.imagedata->GetScalarPointer(
                mc, nr, this_lv_struct_.begin_dcm + dcm * 5 + i));
        outData[mc] = *pixel;
    }
}

Running the model (prediction)

at::Tensor tensor_image = torch::from_blob(img.data,
                                           { 1, 512, 512, 1 }, options);
tensor_image = tensor_image.permute({ 0, 3, 1, 2 }).to(device0);
pred[i] = module0.forward({tensor_image}).toTensor()[0][0];

Exporting the image: this part is the clumsiest. The reason for so many at::Tensor variables is to reduce the number of transfers from GPU memory to main memory and to increase the amount of data moved per transfer. There is surely a better way; I have not found it, and I would be very grateful if anyone who knows one could tell me.

for (int i = 0; i < 5; i++) {
    at::Tensor pred_00 = pred[i];
    for (int w = 0; w < 512; ++w) {
        at::Tensor pred_000 = pred_00[w];
        for (int jj = 0; jj < 512; jj++) {
            at::Tensor pred_0000 = pred_000[jj];
            this_lv_struct_.testshortarr[
                512 * w + jj + i * 262144 +
                dcm * 262144 * 5 + this_lv_struct_.begin_array]
                = 255 * (*(pred_0000.data<float>()) > 0.5);
        }
    }
}

Converting the array to an STL model

qDebug() << "Generating model...";
vtkNew<vtkMarchingCubes> marchingcubes;
vtkNew<vtkSmoothPolyDataFilter> smoothfilter;
vtkNew<vtkSTLWriter> vtk_writer_stl;
vtkNew<vtkImageData> reader_data;
vtkDoubleArray *tempimarr2 = vtkDoubleArray::New();
vtkNew<vtkImageCast> imagedata;
vtkNew<vtkMassProperties> massProperties;

// Note: testshortarr holds shorts, so a vtkShortArray would match the
// element type better than vtkDoubleArray here.
tempimarr2->SetVoidArray(testshortarr, datasize, 1);
reader_data->SetDimensions(512, 512, num);
reader_data->SetSpacing(Spacing[0] * 100, Spacing[1] * 100, Spacing[2] * 100);
reader_data->GetPointData()->SetScalars(tempimarr2);
reader_data->Modified();

imagedata->SetInputData(reader_data);
imagedata->SetOutputScalarTypeToFloat();
marchingcubes->SetInputConnection(imagedata->GetOutputPort());
marchingcubes->SetValue(0, 1);
massProperties->SetInputConnection(marchingcubes->GetOutputPort());
massProperties->Update();

qDebug() << "Elapsed" << time.elapsed() / 1000.0 << "s";
qDebug() << QString("Volume %1 cubic centimetres").arg(massProperties->GetVolume() / 1000000000, 0, 'f', 2);
qDebug() << "File saved to the input directory";
qDebug() << "----------------end----------------";

vtk_writer_stl->SetInputConnection(marchingcubes->GetOutputPort());
vtk_writer_stl->SetFileName(QString(filepath + "/").toLocal8Bit().data());
vtk_writer_stl->Write();
qApp->exit();