Astro AI

Yan E. Barros

PAGE 01
PAGE 02
PAGE 03
https://lightkurve.github.io/lightkurve/index.html
Using NumPy, we can transform the flux data into a 1-D array.
When read as a NumPy array, its shape should be similar to (64707,).
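
A minimal sketch of that step (the target name below is an illustrative placeholder, not taken from the slides), downloading a light curve with lightkurve and converting its flux to a NumPy array:

import numpy as np
import lightkurve as lk

# Hypothetical target: search for a Kepler light curve and download the first result.
search = lk.search_lightcurve("Kepler-10", mission="Kepler")
lc = search[0].download()

# Convert the flux column to a plain 1-D NumPy array and drop NaNs.
flux = np.asarray(lc.flux.value, dtype=np.float64)
flux = flux[~np.isnan(flux)]
print(flux.shape)   # e.g. (64707,) for a long, stitched light curve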

PAGE 04
Using NumPy again, we can compute and visualize the fast Fourier
transform (FFT) of the flux data to understand the frequency
content of this time-series signal.
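
A minimal sketch of this step, assuming flux is the 1-D array from the previous page and a uniform sampling interval dt (the cadence value below is an assumption, not taken from the slides):

import numpy as np
import matplotlib.pyplot as plt

# Assumed uniform cadence in days (placeholder value, ~30-minute sampling).
dt = 30.0 / (60 * 24)

# Real-input FFT of the mean-subtracted flux.
fft_vals = np.fft.rfft(flux - flux.mean())
freqs = np.fft.rfftfreq(flux.size, d=dt)   # frequencies in cycles per day

plt.plot(freqs, np.abs(fft_vals))
plt.xlabel("Frequency (cycles/day)")
plt.ylabel("FFT amplitude")
plt.show()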

PAGE 05
From this, we can also visualize the data as an image.
In this case, the shape of each image is now (254, 254).
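
One way this could be done is sketched below, under the assumption that the 1-D flux array is simply chunked and reshaped into square images (the slides do not specify the exact method):

import numpy as np
import matplotlib.pyplot as plt

side = 254
pixels_per_image = side * side          # 64516 values per image

# Chunk the 1-D flux into as many full (254, 254) images as it allows.
n_images = flux.size // pixels_per_image
images = flux[: n_images * pixels_per_image].reshape(n_images, side, side)
print(images.shape)                     # e.g. (1, 254, 254) for ~64707 points

plt.imshow(images[0], cmap="viridis")
plt.colorbar(label="Flux")
plt.show()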

PAGE 06
LSTM networks (Long Short-Term Memory) are a
type of recurrent neural network (RNN) specifically
designed to handle sequential data, such as time
series. They are widely used in this context due to
their ability to learn and remember long-term
patterns in data.

PAGE 07
Unlike traditional RNNs, LSTMs were created to
overcome the vanishing gradient problem, which
makes it difficult to learn long-term dependencies
in long sequences. This is achieved because LSTMs
have a special cell architecture with "gates" (input,
forget, and output) that control the flow of
information. These gates allow LSTMs to store and
discard information over time, retaining only what
is relevant for making predictions.

PAGE 08
import torch
import torch.nn as nn

class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size):
        super(LSTM, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        # batch_first=True expects input of shape (batch, seq_len, input_size).
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # Zero-initialized hidden and cell states, one per layer.
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size, device=x.device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size, device=x.device)
        out, _ = self.lstm(x, (h0, c0))
        # Use the output at the last time step for the prediction.
        out = self.fc(out[:, -1, :])
        return out
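
A hedged usage sketch (the window length, feature size, and layer sizes below are illustrative assumptions, not values from the slides):

# Hypothetical example: 32 flux windows of 200 time steps, 1 feature per step.
model = LSTM(input_size=1, hidden_size=64, num_layers=2, output_size=1)
x = torch.randn(32, 200, 1)      # (batch, seq_len, input_size)
logits = model(x)
print(logits.shape)              # torch.Size([32, 1])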

PAGE 09
CNNs (Convolutional Neural Networks) are a type
of neural network especially effective for
processing data with grid-like structures, such as
images. They are widely used in computer vision
due to their ability to identify spatial patterns, such
as edges, textures, and shapes, capturing complex
features across different regions of an image.

PAGE 10
The architecture of CNNs uses convolutional layers,
which apply filters (or kernels) to extract these
features. Each filter highlights a specific pattern in
the image, and successive layers capture deeper
and more abstract relationships. CNNs also use
pooling layers to reduce dimensionality and
emphasize the most relevant features, making
them robust and efficient for tasks like
classification, object detection, and image
segmentation.

PAGE 11
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleCNN(nn.Module):
    def __init__(self):
        super(SimpleCNN, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, stride=1, padding=1)
        self.conv3 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, stride=1, padding=1)
        # The flattened size 64 * 8 * 8 assumes 64x64 inputs
        # (64 -> 32 -> 16 -> 8 after three 2x2 poolings).
        self.fc1 = nn.Linear(64 * 8 * 8, 128)
        self.fc2 = nn.Linear(128, 10)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)

    def forward(self, x):
        # Three conv -> ReLU -> pool stages, then flatten for the classifier.
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        x = x.view(-1, 64 * 8 * 8)

PAGE 12
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x
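
A hedged usage sketch (the 64x64 input size matches the flattened layer above; feeding the (254, 254) images from earlier would require resizing them or adjusting fc1, an adaptation the slides do not show):

model = SimpleCNN()
x = torch.randn(8, 1, 64, 64)    # (batch, channels, height, width)
logits = model(x)
print(logits.shape)              # torch.Size([8, 10])
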
Transformers are a neural network architecture
designed to process sequences, such as text, by
capturing long-range dependencies. They are
widely used in natural language processing (NLP)
and other fields involving sequential data due to
their ability to model complex contexts.

PAGE 13
The main innovation of Transformers is the self-
attention mechanism, which allows each part of a
sequence to "pay attention" to other relevant parts,
regardless of their distance within the sequence.
This mechanism enables Transformers to capture
long-range relationships and rich contexts without
the need for sequential processing, as seen in
RNNs.

PAGE 14
class TransformerModel(nn.Module):
    def __init__(self, vocab_size, d_model, nhead, num_encoder_layers, num_decoder_layers,
                 num_classes, dim_feedforward=512, dropout=0.1):
        super(TransformerModel, self).__init__()
        self.embedding = nn.Embedding(vocab_size, d_model)
        self.positional_encoding = PositionalEncoding(d_model, dropout)
        self.transformer = nn.Transformer(d_model=d_model, nhead=nhead,
                                          num_encoder_layers=num_encoder_layers,
                                          num_decoder_layers=num_decoder_layers,
                                          dim_feedforward=dim_feedforward, dropout=dropout)
        self.fc_out = nn.Linear(d_model, num_classes)

    def forward(self, src, tgt):
        # Scale embeddings by sqrt(d_model), as in the original Transformer paper.
        src = self.embedding(src) * torch.sqrt(torch.tensor(self.embedding.embedding_dim,
                                                            dtype=torch.float32))
        tgt = self.embedding(tgt) * torch.sqrt(torch.tensor(self.embedding.embedding_dim,
                                                            dtype=torch.float32))
        src = self.positional_encoding(src)
        tgt = self.positional_encoding(tgt)
        # nn.Transformer defaults to (seq_len, batch, d_model) inputs.
        output = self.transformer(src, tgt)

PAGE 15
        # Take the representation of the last position and classify it.
        output = output[-1, :, :]
        output = self.fc_out(output)
        return output


class PositionalEncoding(nn.Module):
    def __init__(self, d_model, dropout=0.1, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)
        # Precompute sinusoidal encodings for up to max_len positions.
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float()
                             * -(torch.log(torch.tensor(10000.0)) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        # Shape (max_len, 1, d_model) so it broadcasts over the batch
        # dimension of (seq_len, batch, d_model) inputs.
        pe = pe.unsqueeze(1)
        self.register_buffer('pe', pe)

    def forward(self, x):
        # Add the positional encoding for the first x.size(0) positions, then dropout.
        x = x + self.pe[:x.size(0)]
        return self.dropout(x)
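
A hedged usage sketch (vocabulary size, sequence lengths, and head count are illustrative assumptions; in practice the flux data would first have to be tokenized or embedded, which the slides do not detail):

model = TransformerModel(vocab_size=1000, d_model=128, nhead=8,
                         num_encoder_layers=2, num_decoder_layers=2,
                         num_classes=2)
src = torch.randint(0, 1000, (50, 4))   # (src_seq_len, batch)
tgt = torch.randint(0, 1000, (20, 4))   # (tgt_seq_len, batch)
logits = model(src, tgt)
print(logits.shape)                     # torch.Size([4, 2])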

PAGE 16
Physics-Informed Neural Networks (PINNs) are a
type of neural network that integrates physical
laws into the learning process. Unlike traditional
neural networks, which learn purely from data,
PINNs are designed to solve problems in fields such
as fluid dynamics, heat transfer, or structural
mechanics by incorporating differential equations
that describe the underlying physics.

PAGE 17
The key innovation of PINNs is the incorporation of
the governing partial differential equations (PDEs)
or other physical constraints as a loss function
during training. This allows the network to learn not
only from data but also from the physical laws that
govern the system. As a result, PINNs can provide
more accurate solutions, especially in scenarios
where data is sparse or difficult to obtain.

PAGE 18
class PINN(nn.Module):
    def __init__(self, layers):
        super(PINN, self).__init__()
        # Fully connected layers with tanh activations between them.
        self.layers = nn.ModuleList()
        for i in range(len(layers) - 1):
            self.layers.append(nn.Linear(layers[i], layers[i+1]))

    def forward(self, t):
        for layer in self.layers[:-1]:
            t = torch.tanh(layer(t))
        return self.layers[-1](t)


def gravitation(t, model, G, M):
    # Physics residual: r'' should equal -G*M / r**2 (radial Newtonian gravity).
    # t must be created with requires_grad=True for the derivatives below.
    r = model(t)
    r_dot = torch.autograd.grad(r, t, grad_outputs=torch.ones_like(r), create_graph=True)[0]
    r_ddot = torch.autograd.grad(r_dot, t, grad_outputs=torch.ones_like(r_dot), create_graph=True)[0]
    gravitational_force = -G * M / (r**2)
    return r_ddot - gravitational_force

PAGE 19
def pinn_loss(t, model, G, M):
    # Mean squared residual of the governing equation over the sampled times.
    residual = gravitation(t, model, G, M)
    return torch.mean(residual**2)
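
A hedged training sketch (layer sizes, constants, learning rate, and iteration count are illustrative assumptions, not values from the slides):

# Hypothetical training loop: collocation points in time, physics-only loss.
model = PINN(layers=[1, 32, 32, 1])
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
G, M = 1.0, 1.0                                    # normalized constants (assumed)

t = torch.linspace(0.1, 10.0, 200).reshape(-1, 1)
t.requires_grad_(True)                             # needed for the autograd derivatives

for step in range(2000):
    optimizer.zero_grad()
    loss = pinn_loss(t, model, G, M)
    loss.backward()
    optimizer.step()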
