resnet18 adapt to 1-d input in matlab

To adapt ResNet-18 to 1-D input in MATLAB, we need to modify both the input stem and the classification head of the architecture. In the original ResNet-18, the input is a 3-channel 2-D image, and the first convolutional layer has a kernel size of 7x7.

For 1-D input, we can replace that first layer with a convolution whose kernel spans only one spatial dimension (e.g., 1x7). We can also replace the max-pooling layer that immediately follows it with an average-pooling layer, since max-pooling adjacent samples of a 1-D signal may discard useful information.

Here is example code that modifies ResNet-18 to accept 1-D input in MATLAB:

main.m
inputSize = 512; % length of the 1-D input signal
numClasses = 10; % number of output classes

% Load pre-trained ResNet-18
net = resnet18();
lgraph = layerGraph(net);

% Remove the original input stem and the classification head
lgraph = removeLayers(lgraph, {'data', 'conv1', 'bn_conv1', 'conv1_relu', 'pool1'});
lgraph = removeLayers(lgraph, {'fc1000', 'prob', 'ClassificationLayer_predictions'});

% Add a new input stem that treats the signal as a 1-by-inputSize image
% and convolves along the second (length) dimension only
newLayers = [
    imageInputLayer([1 inputSize 1], 'Name', 'input')
    convolution2dLayer([1 7], 64, 'Stride', [1 2], 'Padding', [0 3], 'Name', 'conv1')
    batchNormalizationLayer('Name', 'bn_conv1')
    reluLayer('Name', 'relu_conv1')
    averagePooling2dLayer([1 3], 'Stride', [1 2], 'Padding', [0 1], 'Name', 'pool1')
];
lgraph = addLayers(lgraph, newLayers);

% Reconnect the new stem to the first residual block; removing 'pool1'
% also severed its connections to the block's main branch and shortcut
lgraph = connectLayers(lgraph, 'pool1', 'res2a_branch2a');
lgraph = connectLayers(lgraph, 'pool1', 'res2a/in2');

% Add the new classification head
newLayers = [
    fullyConnectedLayer(numClasses, 'Name', 'fc10')
    softmaxLayer('Name', 'softmax')
    classificationLayer('Name', 'classoutput')
];
lgraph = addLayers(lgraph, newLayers);
lgraph = connectLayers(lgraph, 'pool5', 'fc10');

% Plot the modified ResNet-18 architecture
plot(lgraph);

% Train the network (XTrain must be 1-by-inputSize-by-1-by-numObs)
net = trainNetwork(XTrain, YTrain, lgraph, options);

In the above code, we first load the pre-trained ResNet-18 and remove both its original input stem (input, convolution, batch normalization, ReLU, and max-pooling layers) and its classification head. We then add a new stem built around a 1x7 convolution and an average-pooling layer, and reconnect it to the first residual block, including that block's shortcut connection. Finally, we attach a new fully connected layer, softmax layer, and classification layer, and train the network with trainNetwork (XTrain, YTrain, and options are assumed to be defined). To train only the new layers, the learn-rate factors of the pre-trained layers can be zeroed out first, as sketched below.
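Here is one way to freeze the pre-trained weights before training (a minimal sketch, assuming the layer names used above; only the newly added layers keep nonzero learn-rate factors):

% Freeze pre-trained layers by zeroing their learn-rate factors
newNames = {'conv1', 'bn_conv1', 'fc10'}; % layers added above
layers = lgraph.Layers;
for i = 1:numel(layers)
    layer = layers(i);
    if ismember(layer.Name, newNames)
        continue % leave the newly added layers trainable
    end
    changed = false;
    if isprop(layer, 'WeightLearnRateFactor')
        layer.WeightLearnRateFactor = 0;
        layer.BiasLearnRateFactor = 0;
        changed = true;
    end
    if isprop(layer, 'ScaleLearnRateFactor') % batch normalization layers
        layer.ScaleLearnRateFactor = 0;
        layer.OffsetLearnRateFactor = 0;
        changed = true;
    end
    if changed
        lgraph = replaceLayer(lgraph, layer.Name, layer);
    end
end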

Note that the specific hyperparameters used (such as kernel size, stride, and padding of the new layers) may need to be tuned for your specific task and input size.
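For reference, here is a minimal sketch of how the training data and options might be prepared; the dimensions and values are made-up placeholders, and trainNetwork expects each 1-D signal reshaped into a 1-by-inputSize-by-1 "image":

% Hypothetical training data; replace with your own signals and labels
numObs = 1000;
signals = randn(numObs, inputSize);                  % numObs signals of length inputSize
XTrain = reshape(signals', 1, inputSize, 1, numObs); % 1 x inputSize x 1 x numObs
YTrain = categorical(randi(numClasses, numObs, 1));  % placeholder labels

% Example training options; tune these for your task
options = trainingOptions('adam', ...
    'MaxEpochs', 10, ...
    'MiniBatchSize', 32, ...
    'InitialLearnRate', 1e-3, ...
    'Shuffle', 'every-epoch', ...
    'Verbose', true);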
