Full-Stack Offline App Development
This architectural choice leverages the strengths of each technology: Tauri provides a
lightweight, secure, and performant way to bundle web applications into native
desktop and mobile experiences.[1] Next.js, configured for Static Site Generation (SSG),
offers a familiar and productive environment for building UIs with web technologies
like TypeScript, Shadcn UI, Radix UI, and Tailwind CSS.[3] The Django backend,
supported by Django REST Framework, PostgreSQL, Celery, and Django Channels,
delivers a scalable and feature-rich API for data management, asynchronous task
processing, and potential real-time communication.[5]
This document serves as an in-depth guide, detailing the setup, configuration, and
integration of these diverse technologies. It will cover frontend and backend
development, secure authentication, strategies for local data storage and offline
synchronization, leveraging Rust for performance-critical tasks, and finally, building
and packaging the application. The aim is to provide a clear pathway for developing
sophisticated, offline-first applications that meet contemporary user expectations
across multiple platforms.
```javascript
// next.config.js
const isProd = process.env.NODE_ENV === 'production';
// TAURI_DEV_HOST is set by the Tauri CLI during mobile/remote development
const internalHost = process.env.TAURI_DEV_HOST || 'localhost';

/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'export',
  images: { unoptimized: true },
  assetPrefix: isProd ? undefined : `http://${internalHost}:3000`,
};
module.exports = nextConfig;
```
The `images: { unoptimized: true }` setting is necessary because the default Next.js
image optimization relies on a server, which isn't available in the Tauri SSG context.[3,
12] If image optimization is required, a custom loader (e.g., for a CDN like Cloudinary
or an image processing service provided by the Django backend) must be configured
in `next.config.js`.[13] The `assetPrefix` configuration helps resolve assets correctly
during development when the Next.js dev server is running on a different port.[12]
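As a sketch of such a custom loader (the endpoint URL and query-parameter names here are assumptions, not a real API), a loader is just a function mapping `src`, `width`, and `quality` to an image URL:

```typescript
// Hypothetical loader for next/image, pointing at an image-resizing endpoint
// served by the Django backend. The URL shape (?w=&q=) is an assumption.
interface ImageLoaderParams {
  src: string;
  width: number;
  quality?: number;
}

function djangoImageLoader({ src, width, quality }: ImageLoaderParams): string {
  const base = 'https://api.example.com/media'; // hypothetical backend endpoint
  return `${base}${src}?w=${width}&q=${quality ?? 75}`;
}
```

Such a loader would be referenced from `next.config.js` via `images: { loader: 'custom', loaderFile: './image-loader.ts' }`.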
The CLI will ask several questions, including the app name, window title, and the
frontend development server URL (which will be the Next.js dev server, typically
http://localhost:3000) and the location of your frontend assets after building
(which will be ../out relative to src-tauri, or the absolute path to your Next.js out
directory).4 This creates a src-tauri directory containing the Rust backend code.
2. Configure tauri.conf.json: This file, located in src-tauri/, is central to
configuring the Tauri application. Key settings include 3:
○ build:
■ devUrl: The URL of the Next.js development server (e.g.,
http://localhost:3000). Tauri loads content from this URL during
development.
■ beforeDevCommand: The command to start the Next.js development
server (e.g., npm run dev or pnpm dev). Tauri executes this before
starting its own development window.
■ beforeBuildCommand: The command to build the Next.js frontend for
production (e.g., npm run build or pnpm build). Tauri executes this before
building the final application.
■ frontendDist: The path to the directory containing the statically exported
Next.js assets (e.g., ../out).
○ tauri > bundle > identifier: A unique bundle identifier for the application (e.g.,
com.example.myapp). This must be changed from the default value for
release builds to avoid errors.3
○ tauri > allowlist: This object defines which Tauri API modules and their specific
functions the frontend JavaScript is allowed to access. For an offline-capable
application, permissions for file system access, SQL database interaction,
HTTP requests (if Rust makes them), and potentially secure storage
(keychain, stronghold) will be necessary. For example, for the file system
plugin:
JSON
"allowlist": {
  "fs": {
    "all": false, // Be specific
    "readFile": true,
    "writeFile": true,
    "readDir": true,
    "createDir": true,
    "removeFile": true,
    "removeDir": true,
    "exists": true,
    "scope": ["$APPDATA/**"] // Example scope
  },
  "http": { // If Rust needs to make HTTP calls
    "all": true, // Or specific scope
    "scope": ["https://api.example.com/*"]
  },
  "sql": { // For tauri-plugin-sql
    "all": true // Or specific permissions like "execute", "select"
  }
}
And in tauri.conf.json:
JSON
"tauri": {
"capabilities": {
"default": {
"windows": ["main"], // Apply to the main window
"permissions": ["default"] // Reference the default.json capability set
}
},
//...
}
Developing for mobile involves running commands like tauri android dev or tauri ios
dev.11 Debugging on mobile can be more involved, utilizing Safari Developer Tools for
iOS and Chrome DevTools for Android remote debugging.11 The documentation for
mobile development, while improving, has been noted as somewhat sparse, and
developers might encounter scenarios requiring more troubleshooting compared to
desktop builds.33 It's important to set realistic expectations: while mobile support is a
powerful feature, it adds a layer of complexity to the development and build process.
fn main() {
    tauri::Builder::default()
        .invoke_handler(tauri::generate_handler![greet])
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
○ Commands are invoked from JavaScript using the invoke function from the
@tauri-apps/api/tauri package.4
TypeScript
// In a Next.js component
import { invoke } from '@tauri-apps/api/tauri';

// Call the Rust `greet` command defined above
const greeting = await invoke<string>('greet', { name: 'World' });
○ JS to Rust: JavaScript can also emit events that Rust can listen to.
○ Listening in JS: The frontend listens for events using listen from @tauri-
apps/api/event.34
TypeScript
// In a Next.js component
import { listen } from '@tauri-apps/api/event';

// 'sync-progress' is an example event name
const unlisten = await listen('sync-progress', (event) => {
  console.log(event.payload);
});
// To stop listening:
// unlisten();
Choosing between commands and events depends on the nature of the interaction:
commands for direct function calls and responses, and events for broader
notifications or asynchronous updates.
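The command pattern can also be wrapped in a small typed helper. Injecting `invoke` rather than importing it directly keeps the wrapper testable outside a running Tauri window; in the sketch below the wrapper and the stand-in transport are illustrative, and the `greet` command mirrors the Rust example above:

```typescript
// A typed command wrapper with an injected transport.
type Invoke = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

function makeGreet(invoke: Invoke) {
  return async (name: string): Promise<string> =>
    (await invoke('greet', { name })) as string;
}

// Stand-in transport mimicking the Rust `greet` command for tests.
const fakeInvoke: Invoke = async (cmd, args) => {
  if (cmd === 'greet') return `Hello, ${(args as { name: string }).name}!`;
  throw new Error(`unknown command: ${cmd}`);
};
```

In the real app, the transport passed to `makeGreet` would be the `invoke` import from `@tauri-apps/api/tauri`.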
As mentioned in Section 3.2, the src-tauri directory must be added to the exclude
array in tsconfig.json to prevent the TypeScript compiler from attempting to process
Rust files.4
5. Import this global CSS file in the root layout (app/layout.tsx). Tailwind CSS v4
introduced some changes, such as a CSS-first configuration, which might
alter setup steps if using that version.15
● Shadcn UI and Radix UI:
○ Shadcn UI is not a traditional component library. Instead, components are
added to the project using its CLI. These components are built using Radix UI
primitives for accessibility and core functionality, and styled with Tailwind
CSS.16
○ Installation: First, ensure Tailwind CSS is configured. Then, initialize Shadcn
UI in the Next.js project:
Bash
npx shadcn-ui@latest init
This command will ask configuration questions, including the location of
tailwind.config.js, global CSS file, and path aliases.
○ Adding Components: Use the CLI to add specific components:
Bash
npx shadcn-ui@latest add button card dialog
This copies the component source code into the specified directory (e.g.,
components/ui/), allowing full customization.16
○ Path Aliases: Configure path aliases (e.g., @/components) in tsconfig.json
and next.config.js (if needed by other tools) for cleaner import paths. Shadcn
UI setup often helps with this.
JSON
// tsconfig.json
{
"compilerOptions": {
"baseUrl": ".",
"paths": {
"@/*": ["./*"],
"@/components/*": ["./components/*"]
}
//...
}
}
The "copy-paste" model of Shadcn UI offers immense flexibility but means that
updates to the upstream Shadcn UI components require manual re-adding or
diffing to incorporate changes.35 This is a trade-off for the level of control
provided.
● Best Practices for Component Design with Static Export Compatibility:
○ Since the Next.js app is statically exported for Tauri (output: 'export'),
components must be designed to render correctly in this environment.
○ React Server Components (RSCs) within the App Router are rendered at build
time when using static export, producing static HTML and JSON payloads for
client-side navigation.13 However, dynamic server-side functions (like those
using cookies() or headers() from next/headers, or server actions that modify
data) are not available at runtime in a statically exported app.36
○ Data fetching for dynamic content within components should occur client-
side. This can be done using useEffect hooks to call the Django API, or by
managing data fetching through a state management library like Zustand.
○ Components should primarily be client components ('use client') if they
involve interactivity or rely on browser APIs.
○ For components that need data at initial render, this data can be passed as
props if the page itself is statically generated with that data (though complex
data fetching at build time for many pages can slow down builds). More
commonly, components will fetch their own data client-side or display loading
states/skeletons until data is available.36
○ Radix UI components, being unstyled primitives, generally work well in static
environments as they focus on behavior and accessibility, leaving styling to
Tailwind CSS.
TypeScript
import { create } from 'zustand';

interface SyncState {
  isOnline: boolean;
  syncQueue: Array<{ id: string; type: 'create' | 'update' | 'delete'; payload: any }>;
  syncStatus: 'idle' | 'syncing' | 'error';
  setOnlineStatus: (status: boolean) => void;
  addToQueue: (item: { id: string; type: 'create' | 'update' | 'delete'; payload: any }) => void;
  processQueue: () => Promise<void>; // Placeholder for actual sync logic
  setSyncStatus: (status: 'idle' | 'syncing' | 'error') => void;
}

const useSyncStore = create<SyncState>((set) => ({
  isOnline: true,
  syncQueue: [],
  syncStatus: 'idle',
  setOnlineStatus: (status) => set({ isOnline: status }),
  addToQueue: (item) => set((s) => ({ syncQueue: [...s.syncQueue, item] })),
  processQueue: async () => { /* drain the queue against the Django API */ },
  setSyncStatus: (status) => set({ syncStatus: status }),
}));

function MyComponent() {
  const isOnline = useSyncStore((state) => state.isOnline);
  const addToQueue = useSyncStore((state) => state.addToQueue);
  //... render based on isOnline; enqueue offline changes via addToQueue
}
Zustand provides a pragmatic and efficient way to manage the dynamic state
required by an offline-first application, complementing the static nature of the Next.js
frontend within Tauri.
This creates the necessary database tables for Django's built-in apps and any
models defined.19
2. Create .env file: In the root of the Django project (alongside manage.py), create
a .env file:
Code snippet
# .env
DEBUG=True
SECRET_KEY='your_django_secret_key_here'
DATABASE_NAME='myappdb'
DATABASE_USER='myappuser'
DATABASE_PASSWORD='securepassword'
DATABASE_HOST='localhost'
DATABASE_PORT='5432'
# core_project/settings.py (reading values from the environment)
SECRET_KEY = os.getenv('SECRET_KEY')
DEBUG = os.getenv('DEBUG') == 'True'  # Convert string 'True' to boolean
```
# .gitignore
.env
venv/
__pycache__/
*.pyc
```
5.3. Building REST APIs with Django REST Framework (DRF)
DRF simplifies the creation of web APIs.
1. Define Django Models: In api_app/models.py, define the data models. For
applications with offline synchronization, these models should include fields to
support this, such as created_at, updated_at (or a specific
last_modified_server_timestamp), is_deleted, and deleted_at for soft deletes. If
using a library like django-rest-offlinesync, models might inherit from a base
TrackedModel that provides these fields.42
Python
# api_app/models.py
from django.db import models
from django.utils import timezone
class SyncableModel(models.Model):
    # Example fields for a generic syncable item
    title = models.CharField(max_length=255)
    content = models.TextField(blank=True)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)  # Server-side last modification
    is_deleted = models.BooleanField(default=False)
    deleted_at = models.DateTimeField(null=True, blank=True)

    class Meta:
        abstract = True

    def soft_delete(self):
        self.is_deleted = True
        self.deleted_at = timezone.now()
        self.save()

class Note(SyncableModel):
    pass  # Concrete model used by the serializer and viewset examples below
# api_app/serializers.py
from rest_framework import serializers
from .models import Note

class NoteSerializer(serializers.ModelSerializer):
    class Meta:
        model = Note
        fields = '__all__'  # Or specify fields: ['id', 'title', 'content', 'updated_at', 'is_deleted']
        read_only_fields = ('created_at', 'deleted_at')  # Fields not expected from client on write

# api_app/views.py
from rest_framework import viewsets
from .models import Note
from .serializers import NoteSerializer

class NoteViewSet(viewsets.ModelViewSet):
    queryset = Note.objects.filter(is_deleted=False)  # Exclude soft-deleted items by default
    serializer_class = NoteSerializer

# api_app/urls.py
from rest_framework.routers import DefaultRouter
from .views import NoteViewSet

router = DefaultRouter()
router.register(r'notes', NoteViewSet, basename='note')
urlpatterns = router.urls
Python
# core_project/urls.py
from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('api/', include('api_app.urls')),
]
This setup provides a solid foundation for the Django backend, ready for
implementing authentication and the specific API endpoints required for offline data
synchronization.
REST_FRAMEWORK = {
'DEFAULT_AUTHENTICATION_CLASSES': (
'rest_framework_simplejwt.authentication.JWTAuthentication',
),
# Add other DRF settings like pagination, permissions as needed
}
from datetime import timedelta

SIMPLE_JWT = {
    'ACCESS_TOKEN_LIFETIME': timedelta(minutes=5),  # Example lifetimes; tune for your app
    'REFRESH_TOKEN_LIFETIME': timedelta(days=1),
    'ALGORITHM': 'HS256',
    'SIGNING_KEY': SECRET_KEY,  # Uses Django's SECRET_KEY
    'VERIFYING_KEY': None,
    'AUDIENCE': None,
    'ISSUER': None,
    'JWK_URL': None,
    'LEEWAY': 0,
    'AUTH_TOKEN_CLASSES': ('rest_framework_simplejwt.tokens.AccessToken',),
    'SLIDING_TOKEN_LIFETIME': timedelta(minutes=5),  # Not used if not using sliding tokens
    'SLIDING_TOKEN_REFRESH_LIFETIME': timedelta(days=1),  # Not used if not using sliding tokens
}
urlpatterns = [
path('admin/', admin.site.urls),
path('api/', include('api_app.urls')),
path('auth/', include('djoser.urls')),
path('auth/', include('djoser.urls.jwt')), # Provides /jwt/create/, /jwt/refresh/, /jwt/verify/
]
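On the frontend, obtaining a token pair from djoser's `/auth/jwt/create/` endpoint can be sketched as follows. The `{ access, refresh }` response shape is what SimpleJWT returns; the helper name and base URL are assumptions, and `fetch` is injected for testability:

```typescript
// Sketch: exchange credentials for a JWT pair via djoser's /auth/jwt/create/.
interface TokenPair { access: string; refresh: string }
type PostJson = (url: string, body: unknown) => Promise<{ ok: boolean; json(): Promise<unknown> }>;

async function login(post: PostJson, baseUrl: string, username: string, password: string): Promise<TokenPair> {
  const res = await post(`${baseUrl}/auth/jwt/create/`, { username, password });
  if (!res.ok) throw new Error('Login failed');
  return (await res.json()) as TokenPair;
}
```

The returned tokens would then be stored via the OS keychain, as described in the next section.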
For a Tauri application, leveraging the **OS Keychain** is the most secure and
recommended approach for storing JWTs. This isolates the tokens from the
webview's JavaScript environment, mitigating XSS risks. The `tauri-plugin-keychain`
[45] or the `keyring` Rust crate (exposed via Tauri commands [46]) can be used.
* **Using `tauri-plugin-keychain`:**
* Install: `cargo add tauri-plugin-keychain` in `src-tauri/Cargo.toml` and `pnpm
add tauri-plugin-keychain` in the frontend.
* Initialize the plugin in `src-tauri/src/main.rs`.
* JS API: `saveItem(key, value)`, `getItem(key)`, `removeItem(key)`.[45]
```typescript
// Example usage in frontend; the helper name is illustrative
import { saveItem, getItem, removeItem } from 'tauri-plugin-keychain';

async function refreshAccessToken(): Promise<string | null> {
  const storedRefreshToken = await getItem('refreshToken');
  if (!storedRefreshToken) return null;
  try {
    // Ensure this path is proxied or absolute
    const response = await fetch('/auth/jwt/refresh/', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ refresh: storedRefreshToken }),
    });
    if (!response.ok) throw new Error('Refresh failed');
    const data = await response.json();
    await saveItem('accessToken', data.access); // saveItem from tauri-plugin-keychain
    if (data.refresh) { // If server rotates refresh tokens
      await saveItem('refreshToken', data.refresh);
    }
    return data.access;
  } catch (error) {
    console.error('Token refresh error:', error);
    // Handle logout: remove tokens, redirect
    await removeItem('accessToken'); // removeItem from tauri-plugin-keychain
    await removeItem('refreshToken');
    return null;
  }
}
```
The choice of local storage should be guided by the data's nature: its structure,
sensitivity, query requirements, and whether it needs to be accessed primarily by
JavaScript or Rust. A hybrid approach is often optimal:
● Core Application Data (Structured, Relational, Synced): Tauri SQL Plugin with
SQLite.
● User Preferences, Simple Configs: Tauri FS Plugin (JSON/text files).
● Sensitive Secrets (Tokens, Keys): OS Keychain (via tauri-plugin-keychain) or
Tauri Stronghold Plugin.
● Sync Queue, Temporary Frontend State: IndexedDB (possibly via localForage
or integrated with Zustand's persistence).
Table: Offline Storage Options Comparison (Tauri Plugins vs. Web Standards)

Table: Conflict Resolution Strategies

| Strategy | Behavior | Pros | Cons | Best Suited For |
| --- | --- | --- | --- | --- |
| Last Write Wins (LWW) | The latest change (based on timestamp) overwrites others. | Simple to implement. | Potential data loss if not carefully managed. | Simple data types where losing an intermediate update is acceptable; or when one source is authoritative. |
| Server Wins | Server's version always overwrites client's version in case of conflict. | Ensures server data integrity; simpler client logic. | Client's offline work might be lost without notice. | When server data is considered the absolute source of truth. |
| Client Wins | Client's version always overwrites server's version. | Preserves user's offline work. | Can corrupt server data if client data is stale or incorrect. | Rarely ideal for shared data; perhaps for user-specific settings synced from client. |
○ Zustand can be used to manage the state related to this queue in the UI (e.g.,
number of pending items, current sync operation status, errors).37 It can also
trigger queue processing.
● Implementing API Calls for Sync 18:
A dedicated "Sync Manager" module in the Next.js/TypeScript codebase should
handle this.
○ Outgoing Sync (Pushing Local Changes):
1. Triggered when online (e.g., by network status change, periodically, or
manually).
2. Retrieve items from the sync queue (e.g., from IndexedDB).
3. For each item:
■ CREATE: POST item.payload to /api/{item.modelType}/. On success
(e.g., 201 Created), the server response should include the full object
with its new server_id and updated_at timestamp. Update the local
record with server_id, new updated_at, set sync_status to 'synced',
and remove the item from the queue.
■ UPDATE: PUT or PATCH item.payload to
/api/{item.modelType}/{item.serverId}/, including an at:
item.timestamp parameter/header. On success (e.g., 200 OK), the
server response should include the updated object with its new
updated_at. Update the local record, set sync_status to 'synced', and
remove from queue.
■ DELETE: DELETE to /api/{item.modelType}/{item.serverId}/, potentially
including an at: item.timestamp parameter/header. On success (e.g.,
204 No Content), hard-delete the local record (as the server now has
it as soft-deleted) and remove from queue.
4. Handle Server Responses:
■ Success (2xx): Process as described above.
■ Conflict (409 - for UPDATE/DELETE): Trigger client-side conflict
resolution logic (see below). Do not remove from queue until resolved.
■ Other Errors (4xx, 5xx): Implement retry logic (e.g., exponential
backoff), increment attempts. If max attempts reached, mark item as
failed and notify user or log.
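The push loop in steps 1–4 can be condensed into a sketch. The transport is an injected interface so the control flow can be tested in isolation; all names are illustrative, and local-record updates are reduced to comments:

```typescript
// Condensed sketch of the outgoing-sync loop described above.
interface QueueItem {
  id: string;
  type: 'create' | 'update' | 'delete';
  payload: unknown;
  attempts: number;
}
interface PushResult { status: number; body?: unknown }
type Push = (item: QueueItem) => Promise<PushResult>;

interface Outcome { synced: string[]; conflicted: string[]; failed: string[] }

async function pushQueue(queue: QueueItem[], push: Push, maxAttempts = 3): Promise<Outcome> {
  const out: Outcome = { synced: [], conflicted: [], failed: [] };
  for (const item of queue) {
    const res = await push(item);
    if (res.status >= 200 && res.status < 300) {
      out.synced.push(item.id);     // update local record, drop from queue
    } else if (res.status === 409) {
      out.conflicted.push(item.id); // hand off to conflict resolution; keep queued
    } else {
      item.attempts += 1;           // caller retries later (e.g., with exponential backoff)
      if (item.attempts >= maxAttempts) out.failed.push(item.id);
    }
  }
  return out;
}
```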
○ Incoming Sync (Pulling Server Changes):
1. Triggered when online.
2. For each synced model type, retrieve the
last_successful_server_sync_timestamp from local preferences (e.g.,
stored using Tauri FS plugin or IndexedDB).
3. Fetch New/Updated Records: Call server list endpoints: GET
/api/{modelType}/?since=<last_successful_server_sync_timestamp>.
■ The server returns records modified after that timestamp, along with a
new request_timestamp (or until_timestamp).
■ Iterate through returned records:
■ If a record's server_id exists locally: Compare server updated_at
with local updated_at. If server is newer and local sync_status is
'synced', update the local record. If local sync_status is 'modified'
or 'new', this is a conflict (see client-side conflict resolution).
■ If server_id does not exist locally: Insert as a new local record with
sync_status = 'synced'.
■ After processing all records, store the server's request_timestamp as
the new last_successful_server_sync_timestamp for this model type.
4. Fetch Soft-Deleted Records: Call deleted object endpoints: GET
/api/{modelType}/deleted/?
since=<last_successful_deleted_sync_timestamp>.
■ Server returns IDs of soft-deleted items and a new
request_timestamp.
■ For each ID, hard-delete the corresponding record from the local
database.
■ Store the server's request_timestamp as the new
last_successful_deleted_sync_timestamp.
■ If the server returns 206 Partial Content 42: This indicates the server's
list of deleted items might be incomplete (due to its own cleanup of
old soft-deletes). The client must then:
■ Fetch the full list of active record IDs from the server (GET
/api/{modelType}/?fields=id).
■ Compare this list with the IDs in the local database. Any local ID
not present in the server's list (and not currently in the outgoing
'CREATE' queue) must have been deleted on the server. Hard-
delete these locally. This is a more expensive fallback. The client
must carefully manage timestamps to ensure it doesn't miss
updates or re-process data unnecessarily. It's important that the
client only updates its last_successful_server_sync_timestamp
after successfully processing all data from the server for that sync
cycle.
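The per-record merge rules from the pull steps above can be expressed as a pure function. The record shapes below are illustrative (timestamps reduced to numbers for brevity):

```typescript
// Sketch: apply pulled server records to the local store per the rules above.
interface ServerRecord { server_id: string; updated_at: number; data: unknown }
interface LocalRecord {
  server_id: string;
  updated_at: number;
  sync_status: 'synced' | 'modified' | 'new';
  data: unknown;
}

function applyPulledRecords(
  local: Map<string, LocalRecord>,
  pulled: ServerRecord[],
): { updated: string[]; inserted: string[]; conflicts: string[] } {
  const updated: string[] = [];
  const inserted: string[] = [];
  const conflicts: string[] = [];
  for (const rec of pulled) {
    const mine = local.get(rec.server_id);
    if (!mine) {
      // Unknown locally: insert as a new, already-synced record
      local.set(rec.server_id, { server_id: rec.server_id, updated_at: rec.updated_at, sync_status: 'synced', data: rec.data });
      inserted.push(rec.server_id);
    } else if (mine.sync_status !== 'synced') {
      // Local offline edits collide with a server change: defer to conflict resolution
      conflicts.push(rec.server_id);
    } else if (rec.updated_at > mine.updated_at) {
      // Server is newer and local copy is clean: accept the server version
      local.set(rec.server_id, { ...mine, updated_at: rec.updated_at, data: rec.data });
      updated.push(rec.server_id);
    }
  }
  return { updated, inserted, conflicts };
}
```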
● Client-Side Conflict Detection and Resolution 42:
○ A conflict occurs when the client tries to PUSH an update (or delete) for a
record, and the server returns a 409 Conflict because the record's
updated_at timestamp on the server is different from the at timestamp sent
by the client. This means the server's version changed while the client was
offline or before the client's change was synced.
○ Another type of conflict occurs during an incoming PULL sync: the client
fetches an updated record from the server, but finds that the local version of
that same record has also been modified offline (its sync_status is 'modified'
or 'new').
○ Resolution Steps upon 409 Conflict from server (or detected during
PULL):
1. Keep the local changes (from the sync queue item or dirty local record)
temporarily.
2. Fetch the latest version of the conflicting item from the server (GET
/api/{modelType}/{serverId}/).
3. Apply a chosen strategy (from Table in 7.2):
■ LWW (Server Wins): Discard local changes. Update the local
database with the server's version. Mark the local record as 'synced'.
Remove the original operation from the sync queue. Notify the user
that their offline changes were overridden.
■ LWW (Client Wins - Use with extreme caution): Resubmit the local
changes to the server, this time potentially without the at parameter
or with a flag to force overwrite (if the API supports it). This is
generally risky as it can lead to data loss on the server.
■ Merge: If the data is structured in a way that allows merging (e.g.,
adding comments to a post, where comments are separate entities or
an array), attempt to merge the local changes with the server version.
This requires custom logic per model type. After merging, PUT/PATCH
the merged version to the server (again, with the server's latest
updated_at as the new at parameter).
■ Create Copy/Duplicate 42: Save the client's conflicting version as a
new local record (e.g., "Note Title (conflict 1)") and mark it for creation
('new'). Then, apply the server's version to the original local record.
The user can then manually reconcile.
■ Notify User for Manual Resolution: Store both the server version
and the client's local version. Update the UI to show the conflict and
provide tools for the user to compare and choose which version to
keep, or how to merge them. This is the most robust for complex data
but requires significant UI/UX work. The chosen conflict resolution
strategy can be global or per-model-type. It's essential to log
conflicts and resolution actions for debugging and auditing.
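Two of these strategies — server wins and create copy — can be sketched as a pure function. The record shape and naming are illustrative, not a prescribed API:

```typescript
// Sketch: "server-wins" discards the local edit; "create-copy" keeps the local
// edit as a duplicate record queued for creation, while the server version
// replaces the original.
interface Versioned { id: string; title: string; updated_at: number }

function resolveConflict(
  localVersion: Versioned,
  serverVersion: Versioned,
  strategy: 'server-wins' | 'create-copy',
): { keep: Versioned; extra?: Versioned } {
  if (strategy === 'server-wins') {
    return { keep: serverVersion }; // local change discarded; notify the user
  }
  // create-copy: the user's conflicting edit survives as a new record
  const copy: Versioned = {
    ...localVersion,
    id: `${localVersion.id}-conflict`,
    title: `${localVersion.title} (conflict)`,
  };
  return { keep: serverVersion, extra: copy };
}
```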
8.1. Django Celery and Celery Beat with Redis for Asynchronous Tasks
Celery allows Django to offload tasks to background worker processes, preventing
the main web application from becoming unresponsive during time-consuming
operations. Celery Beat schedules periodic tasks.21
● Rationale:
○ Asynchronous Operations: Tasks like sending emails, generating reports, or
complex data processing after an API call can be handled by Celery workers
without making the user wait.
○ Periodic Tasks: Celery Beat can schedule recurring tasks, such as database
cleanup, sending daily digests, or, relevant to this project, hard-deleting soft-
deleted records. The cleanup of soft-deleted records is particularly important
for maintaining database performance and managing storage in a system
that uses soft deletes for offline synchronization.22
● Setup 7:
1. Install Redis: Redis typically serves as the message broker (to queue tasks)
and can also be a result backend (to store task results). Install Redis server
(e.g., sudo apt-get install redis-server on Ubuntu).[7]
2. Install Packages:
Bash
pip install celery[redis] django-celery-results django-celery-beat
(django-celery-results stores task results in the Django database, django-
celery-beat stores periodic task schedules in the database).
3. Create celery.py: In the Django project directory (e.g.,
core_project/celery.py):
Python
# core_project/celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.conf import settings
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'core_project.settings')
app = Celery('core_project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)  # Auto-discover tasks.py in apps
4. Update core_project/__init__.py: Import the Celery app so it is loaded when
Django starts:
Python
# core_project/__init__.py
from .celery import app as celery_app
__all__ = ('celery_app',)
5. Configure settings.py:
Python
# core_project/settings.py
CELERY_BROKER_URL = 'redis://localhost:6379/0' # Your Redis URL
CELERY_RESULT_BACKEND = 'django-db' # Using django-celery-results
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'UTC' # Or your project's timezone
# For django-celery-beat
CELERY_BEAT_SCHEDULER = 'django_celery_beat.schedulers:DatabaseScheduler'
6. Add to INSTALLED_APPS:
Python
INSTALLED_APPS = [
#...
'django_celery_results',
'django_celery_beat',
#...
]
7. Run Migrations: python manage.py migrate (creates tables for results and
beat schedules).
8. Define Tasks: Create tasks.py files within Django apps. Example:
Python
# api_app/tasks.py
from celery import shared_task
from django.utils import timezone
from datetime import timedelta
from .models import Note  # Assuming Note has is_deleted and deleted_at fields

@shared_task(name="cleanup_old_soft_deleted_notes")
def cleanup_old_soft_deleted_notes_task():
    cutoff_date = timezone.now() - timedelta(days=30)  # e.g., older than 30 days
    # If using django-rest-offlinesync's TrackedModel, the deletion timestamp is
    # `deleted_at`; other schemes may use `updated_at` when `is_deleted` is true.
    # Adjust field names to match your actual model structure.
    records_to_delete = Note.objects.filter(
        is_deleted=True,
        deleted_at__lte=cutoff_date,  # or updated_at__lte for that scheme
    )
    count = records_to_delete.count()
    records_to_delete.delete()  # This performs a hard delete
    return f"Hard-deleted {count} soft-deleted notes older than 30 days."
This task is crucial for systems employing soft deletes for synchronization, as
it prevents indefinite accumulation of logically deleted data in the database.22
9. Schedule Periodic Tasks: This can be done via the Django Admin interface
(if django_celery_beat is configured) or directly in settings.py for static
schedules:
Python
# core_project/settings.py
from celery.schedules import crontab
CELERY_BEAT_SCHEDULE = {
    'hard-delete-old-notes-daily': {
        'task': 'cleanup_old_soft_deleted_notes',  # Name given in @shared_task
        'schedule': crontab(hour=3, minute=0),  # Run daily at 3 AM
        # 'args': (16, 16)  # If the task takes arguments
    },
}
use std::io::{Read, Write};
use age::secrecy::Secret;

#[tauri::command]
fn encrypt_data_rust(data: Vec<u8>, passphrase: Option<String>) -> Result<Vec<u8>, String> {
    // Falling back to a hard-coded passphrase is insecure; prefer a user-supplied secret
    let key = Secret::new(passphrase.unwrap_or_else(|| "default-fallback-key".to_string()));
    let encryptor = age::Encryptor::with_user_passphrase(key);
    let mut out = Vec::new();
    let mut writer = encryptor.wrap_output(&mut out).map_err(|e| e.to_string())?;
    writer.write_all(&data).map_err(|e| e.to_string())?;
    writer.finish().map_err(|e| e.to_string())?;
    Ok(out)
}

#[tauri::command]
fn decrypt_data_rust(encrypted_data: Vec<u8>, passphrase: Option<String>) -> Result<Vec<u8>, String> {
    let key = Secret::new(passphrase.unwrap_or_else(|| "default-fallback-key".to_string()));
    // A decryptor is obtained by parsing the ciphertext header, not constructed directly
    let decryptor = match age::Decryptor::new(&encrypted_data[..]).map_err(|e| e.to_string())? {
        age::Decryptor::Passphrase(d) => d,
        _ => return Err("expected passphrase-encrypted data".to_string()),
    };
    let mut out = Vec::new();
    let mut reader = decryptor.decrypt(&key, None).map_err(|e| e.to_string())?;
    reader.read_to_end(&mut out).map_err(|e| e.to_string())?;
    Ok(out)
}
// Alternative: using the `keyring` crate via Tauri commands (keyring 2.x,
// where Entry::new returns a Result):
// use keyring::Entry;
//
// #[tauri::command]
// fn save_to_keychain_rust(service_name: String, username: String, secret: String) -> Result<(), String> {
//     let entry = Entry::new(&service_name, &username).map_err(|e| e.to_string())?;
//     entry.set_password(&secret).map_err(|e| e.to_string())
// }
//
// #[tauri::command]
// fn get_from_keychain_rust(service_name: String, username: String) -> Result<String, String> {
//     let entry = Entry::new(&service_name, &username).map_err(|e| e.to_string())?;
//     entry.get_password().map_err(|e| e.to_string())
// }
#[derive(serde::Deserialize)]
struct CalculationParams {
    value1: f64,
    value2: f64,
    iterations: u32,
}

#[derive(serde::Serialize)]
struct CalculationResult {
    result: f64,
    time_taken_ms: u128,
}
#[tauri::command]
async fn perform_complex_calculation(params: CalculationParams) -> Result<CalculationResult, String> {
    let start_time = std::time::Instant::now();
    // Simulate a complex calculation
    let mut temp_result = params.value1;
    for _i in 0..params.iterations {
        temp_result = (temp_result * params.value2).sin().acos().tan();
        if temp_result.is_nan() { // Basic error check for demo
            return Err("Calculation resulted in NaN".to_string());
        }
    }
    let duration = start_time.elapsed();
    Ok(CalculationResult {
        result: temp_result,
        time_taken_ms: duration.as_millis(),
    })
}
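On the frontend, such a command is typically wrapped in a typed helper. A sketch with the invoke transport injected, so the call shape can be checked without a running Tauri app (the TypeScript interfaces simply mirror the Rust structs above):

```typescript
// Typed wrapper around the perform_complex_calculation command.
interface CalculationParams { value1: number; value2: number; iterations: number }
interface CalculationResult { result: number; time_taken_ms: number }
type InvokeLike = (cmd: string, args: Record<string, unknown>) => Promise<unknown>;

async function complexCalculation(invoke: InvokeLike, params: CalculationParams): Promise<CalculationResult> {
  return (await invoke('perform_complex_calculation', { params })) as CalculationResult;
}
```

In production code, the transport would be the `invoke` import from `@tauri-apps/api/tauri`.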
● Managing State in Rust: Tauri allows managing shared state within Rust that
can be accessed by commands using tauri::State<T>.61 This is useful for holding
database connections, application-wide configurations, or any other shared
resources. Mutability is handled using interior mutability patterns like
std::sync::Mutex or tokio::sync::Mutex for async commands.61
Rust commands act as a secure and performant bridge, enabling the JavaScript
frontend to initiate sensitive or computationally intensive operations without directly
handling the underlying complexity or exposing sensitive data to the webview.
This Next.js build process is usually automated as part of the Tauri build flow by configuring
the beforeBuildCommand in tauri.conf.json.12
Example package.json script:
JSON
// package.json
"scripts": {
"dev": "next dev",
"build": "next build", // This command generates the static export to `out/`
"start": "next start",
"tauri": "tauri"
}
And in tauri.conf.json:
JSON
// src-tauri/tauri.conf.json
{
"build": {
"beforeBuildCommand": "npm run build", // Or pnpm build, yarn build
"beforeDevCommand": "npm run dev", // Or pnpm dev, yarn dev
"devUrl": "http://localhost:3000",
"frontendDist": "../out" // Relative to src-tauri, or absolute path
},
//...
}
This ensures that whenever tauri dev or tauri build is run, the Next.js frontend is
prepared correctly.
The Tauri build process leverages Rust's compilation capabilities and platform-
specific bundling tools to create optimized native applications. One of Tauri's
significant advantages is its small bundle size compared to alternatives like Electron,
as it uses the operating system's native webview instead of bundling its own browser
engine.2 This results in smaller downloads and installations for end-users.
This combination allows developers to build applications that are not only cross-
platform but also resilient to network interruptions, providing a seamless user
experience. The small bundle size offered by Tauri is a significant advantage for
distribution.
In conclusion, the described stack offers a compelling solution for modern application
development needs. By carefully addressing the complexities of offline
synchronization, security, and cross-platform consistency, developers can create
powerful and user-friendly applications that stand out in today's connected, yet often
intermittently connected, world.
Works cited