Mac1 Py

The document outlines a TensorFlow script for training a convolutional neural network (CNN) on a dataset of Tamil syllabic letters. It includes steps for checking GPU availability, creating a dataset, building the model architecture with convolutional and dense layers, compiling the model, and training it for 50 epochs. Finally, it saves the model structure in JSON format and the trained weights in an H5 file.

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Dropout, Flatten
from tensorflow.keras.optimizers import Adam
import os

# Restrict TensorFlow to the first GPU
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Check GPU availability
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))

# Enable memory growth to avoid GPU memory allocation errors
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

# Create TensorFlow Dataset for train data
train_dataset = tf.keras.preprocessing.image_dataset_from_directory(
    'D:/my projects/Machine Learning/rajalingam/Tamil_Syllabic_Letters/train',
    label_mode='categorical',  # categorical labels for multi-class classification
    image_size=(256, 256),
    batch_size=256  # increased batch size
)

# Cache and prefetch the train dataset (cache first, then prefetch, per the tf.data performance guide)
train_dataset = train_dataset.cache().prefetch(tf.data.experimental.AUTOTUNE)
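The pipeline above trains on every image in the directory with no held-out set. As an optional sketch, image_dataset_from_directory also accepts validation_split, subset, and seed arguments for carving out a validation subset; the val_dataset name below is illustrative, and the matching training call would need the same validation_split and seed with subset='training'.

# Optional sketch: build a validation subset from the same directory
val_dataset = tf.keras.preprocessing.image_dataset_from_directory(
    'D:/my projects/Machine Learning/rajalingam/Tamil_Syllabic_Letters/train',
    label_mode='categorical',
    image_size=(256, 256),
    batch_size=256,
    validation_split=0.2,  # hold out 20% of the images
    subset='validation',   # this call returns the held-out subset
    seed=42                # must match the seed used for the training subset
)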

# Create model structure
train_model = Sequential()

# Add convolutional layers
train_model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(256, 256, 3)))
train_model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
train_model.add(MaxPooling2D(pool_size=(2, 2)))
train_model.add(Dropout(0.25))

train_model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
train_model.add(MaxPooling2D(pool_size=(2, 2)))
train_model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
train_model.add(MaxPooling2D(pool_size=(2, 2)))
train_model.add(Dropout(0.25))

# Flatten and add dense layers
train_model.add(Flatten())
train_model.add(Dense(1024, activation='relu'))
train_model.add(Dropout(0.5))
train_model.add(Dense(12, activation='softmax'))  # 12 output units for 12 classes with softmax activation

# Compile the model
train_model.compile(loss='categorical_crossentropy',
                    optimizer=Adam(learning_rate=0.0001),
                    metrics=['accuracy'])

# Train the model
emotion_model_info = train_model.fit(
    train_dataset,
    epochs=50,
)

# Save model structure in JSON file
model_json = train_model.to_json()
with open("train_model.json", "w") as json_file:
    json_file.write(model_json)

# Save trained model weights in .h5 file
train_model.save_weights('train_model.h5')
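Since the structure and weights are saved separately, restoring the model later requires reading both files back. A minimal sketch, assuming the train_model.json and train_model.h5 files written above (loaded_model is an illustrative name):

# Rebuild the model from the saved JSON structure and H5 weights
from tensorflow.keras.models import model_from_json

with open("train_model.json", "r") as json_file:
    loaded_model = model_from_json(json_file.read())
loaded_model.load_weights('train_model.h5')

# Re-compile before evaluating or resuming training
loaded_model.compile(loss='categorical_crossentropy',
                     optimizer=Adam(learning_rate=0.0001),
                     metrics=['accuracy'])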
