The mental model: Unreal has multiple serialization paths
The old version of this article treated serialization like one giant bucket. In practice, Unreal has several related but different systems, and the right choice depends on why you are writing bytes.
- **Package / object serialization**: used by the engine to load assets, classes, defaults, and reflected object data.
- **Save data serialization**: used when you want durable runtime state on disk, usually via USaveGame or a custom archive.
- **Network serialization**: used when state crosses the network and every bit matters for bandwidth, determinism, and validation.
- **Delta / collection serialization**: used when arrays or state sets change over time and you only want to ship the diff.
FArchive sits at the center of most of these flows. It is the base archive type Unreal uses for reading and writing serialized data, and it exposes the common machinery used by object serialization, custom binary blobs, and many network-oriented helpers. FStructuredArchive builds on the same idea but gives you a structured API when you want something less brittle than a raw field order.
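To make the positional nature of that stream concrete, here is a minimal sketch using FMemoryWriter and FMemoryReader (variable names are illustrative): the same `<<` operators write on one side and read on the other, and field order is the entire contract.

```cpp
#include "Serialization/MemoryWriter.h"
#include "Serialization/MemoryReader.h"

TArray<uint8> Bytes;

int32 Score = 42;
FString PlayerName = TEXT("Ada");

// FMemoryWriter is an FArchive in writing mode.
FMemoryWriter Writer(Bytes);
Writer << Score;
Writer << PlayerName;

// FMemoryReader replays the same stream; order must match exactly.
int32 LoadedScore = 0;
FString LoadedName;

FMemoryReader Reader(Bytes);
Reader << LoadedScore;
Reader << LoadedName;
```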
The important mindset shift is this: disk format, replication format, and editor/runtime object serialization are not the same optimization problem. Disk data usually cares about backward compatibility and how gracefully the format can evolve. Network data cares about size, determinism, and replayability. Engine object serialization cares about reflection, references, and compatibility with the UObject ecosystem.
Pick the right tool before you write code
The biggest source of complexity is using a low-level solution when a higher-level one already fits the problem. This table is the fast way to decide.
| Use case | Best fit | Why |
|---|---|---|
| Simple persistent player/profile data | USaveGame | Lowest friction, integrates cleanly with engine helpers, good for slot-based saves. |
| Runtime data blob you fully own | FArchive + custom versioning | Lets you define exactly what goes in and how older data migrates forward. |
| Structured, named fields | FStructuredArchive | Better fit when you want a structured format instead of a purely positional stream. |
| Compact replicated structs | NetSerialize | Lets you quantize or pack bits for the network rather than shipping full-precision values. |
| Replicated arrays that change over time | FFastArraySerializer | Delta replication is usually far cheaper than resending the whole container. |
That last row matters a lot. A replicated inventory, status list, score feed, or ability handle list often looks small early on, but once a multiplayer project grows, “just replicate the whole array” becomes one of those decisions you regret everywhere else.
SaveGame done properly: use the high-level path first
If you are saving profile data, unlocked items, options, checkpoint state, or a small amount of runtime progress, USaveGame should be your default starting point. It is simple, readable, and easy to organize around slots.
```cpp
USTRUCT(BlueprintType)
struct FInventorySaveEntry
{
    GENERATED_BODY()

    UPROPERTY(SaveGame)
    FPrimaryAssetId ItemId;

    UPROPERTY(SaveGame)
    int32 Quantity = 0;
};

UCLASS()
class UMySaveGame : public USaveGame
{
    GENERATED_BODY()

public:
    UPROPERTY(SaveGame)
    FString CurrentMapName;

    UPROPERTY(SaveGame)
    FVector PlayerLocation = FVector::ZeroVector;

    UPROPERTY(SaveGame)
    TArray<FInventorySaveEntry> Inventory;
};

void UMySaveSubsystem::WriteCurrentSave()
{
    UMySaveGame* Save = Cast<UMySaveGame>(
        UGameplayStatics::CreateSaveGameObject(UMySaveGame::StaticClass()));
    check(Save);

    Save->CurrentMapName = GetWorld()->GetMapName();
    Save->PlayerLocation = CachedPlayerLocation;
    Save->Inventory = CachedInventory;

    UGameplayStatics::AsyncSaveGameToSlot(
        Save,
        SlotName,
        0,
        FAsyncSaveGameToSlotDelegate::CreateUObject(
            this, &ThisClass::OnSaveFinished));
}
```
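The matching load path is symmetric. Here is a sketch assuming the same subsystem (OnLoadFinished and the cached members are this article's example names, not engine API):

```cpp
void UMySaveSubsystem::LoadCurrentSave()
{
    UGameplayStatics::AsyncLoadGameFromSlot(
        SlotName,
        0,
        FAsyncLoadGameFromSlotDelegate::CreateUObject(
            this, &ThisClass::OnLoadFinished));
}

void UMySaveSubsystem::OnLoadFinished(
    const FString& /*InSlotName*/, const int32 /*UserIndex*/, USaveGame* Loaded)
{
    // The cast fails gracefully if the slot holds a different class.
    if (const UMySaveGame* Save = Cast<UMySaveGame>(Loaded))
    {
        CachedPlayerLocation = Save->PlayerLocation;
        CachedInventory = Save->Inventory;
    }
}
```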
The UPROPERTY(SaveGame) specifier is still worth using because it documents intent and plays nicely with custom save archives that check Ar.IsSaveGame(). The subtle part is that Epic’s own SaveGameToSlot documentation says the function writes all non-transient properties on the SaveGame object, and the SaveGame flag is not checked there. That catches a lot of people off guard.
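If you do want a path that respects the flag, the usual trick is a custom archive with ArIsSaveGame set before serializing the object. A minimal sketch (SomeObject stands in for whatever UObject you are saving):

```cpp
#include "Serialization/MemoryWriter.h"
#include "Serialization/ObjectAndNameAsStringProxyArchive.h"

TArray<uint8> Bytes;
FMemoryWriter MemWriter(Bytes);

// Proxy archive that stores object references as name strings.
FObjectAndNameAsStringProxyArchive Ar(MemWriter, /*bLoadIfFindFails*/ true);
Ar.ArIsSaveGame = true; // only UPROPERTY(SaveGame) fields serialize now

SomeObject->Serialize(Ar);
```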
So when should you leave USaveGame behind? Usually when you need one of these:
- a custom binary layout,
- explicit version migration logic,
- very strict control over which fields serialize,
- or a format shared with backend services, replay tools, or external pipelines.
Custom binary + versioning: the part most tutorials skip
Raw binary serialization is not hard. Evolving it safely is the hard part. If you ever change field order, add a member, or remove one, old data can become garbage unless you version the format deliberately.
```cpp
namespace WeaponStateVersion
{
    const FGuid GUID(0xD59E7D55, 0x44B34B89, 0xA50FA3C1, 0x6B228E30);

    enum Type : int32
    {
        Initial = 0,
        AddedMagazineAmmo,

        Latest = AddedMagazineAmmo
    };

    // Registers the custom version with the engine at static init time.
    const FCustomVersionRegistration Registration(
        GUID,
        Latest,
        TEXT("WeaponStateVersion"));
}

USTRUCT()
struct FWeaponRuntimeState
{
    GENERATED_BODY()

    UPROPERTY()
    float Heat = 0.0f;

    UPROPERTY()
    int32 MagazineAmmo = 0;

    bool Serialize(FArchive& Ar)
    {
        Ar.UsingCustomVersion(WeaponStateVersion::GUID);
        const int32 Version = Ar.CustomVer(WeaponStateVersion::GUID);

        Ar << Heat;

        if (Version >= WeaponStateVersion::AddedMagazineAmmo)
        {
            Ar << MagazineAmmo;
        }
        else if (Ar.IsLoading())
        {
            // Old data predates this field; fall back to a sane default.
            MagazineAmmo = 0;
        }

        return true;
    }
};

template<>
struct TStructOpsTypeTraits<FWeaponRuntimeState>
    : public TStructOpsTypeTraitsBase2<FWeaponRuntimeState>
{
    enum
    {
        WithSerializer = true
    };
};
```
That version registration gives you a stable branch point when the format changes. It is much safer than “just append the new field and hope all old files disappear soon.”
```cpp
bool UMyPersistenceSubsystem::WriteWeaponState(
    const FWeaponRuntimeState& InState,
    TArray<uint8>& OutBytes)
{
    OutBytes.Reset();
    FMemoryWriter Writer(OutBytes, /*bIsPersistent*/ true);

    // Serialize is non-const, so write from a mutable copy.
    FWeaponRuntimeState MutableCopy = InState;
    MutableCopy.Serialize(Writer);

    return !Writer.IsError();
}

bool UMyPersistenceSubsystem::ReadWeaponState(
    const TArray<uint8>& InBytes,
    FWeaponRuntimeState& OutState)
{
    if (InBytes.IsEmpty())
    {
        return false;
    }

    FMemoryReader Reader(InBytes, /*bIsPersistent*/ true);
    return OutState.Serialize(Reader) && !Reader.IsError();
}
```
One more design tip: when the layout starts needing names, nested sections, or better long-term maintenance, consider moving that particular format toward FStructuredArchive instead of leaving it as a purely positional byte stream. Positional streams are compact, but they are also easier to break accidentally.
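For a flavor of the difference, here is a hedged sketch of the same weapon state written through named slots (SA_VALUE and FStructuredArchiveFromArchive are the structured-archive helpers; double-check the exact spellings against your engine version):

```cpp
void SerializeWeaponState(FStructuredArchive::FSlot Slot, FWeaponRuntimeState& State)
{
    // A record is a set of named fields, not a fixed byte order.
    FStructuredArchive::FRecord Record = Slot.EnterRecord();
    Record << SA_VALUE(TEXT("Heat"), State.Heat);
    Record << SA_VALUE(TEXT("MagazineAmmo"), State.MagazineAmmo);
}

// Adapting an existing FArchive-based flow:
// FStructuredArchiveFromArchive Adapter(Ar);
// SerializeWeaponState(Adapter.GetSlot(), State);
```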
Custom network serialization: compress what matters
Replication is serialization too, but you should think about it differently. On the network, precision is not free. If you replicate a struct 20 times per second to multiple clients, a few wasteful fields become an expensive system very quickly.
This is where NetSerialize earns its keep. It lets you explicitly quantize or pack data so you send the information you actually need, not the default in-memory representation of the struct.
```cpp
USTRUCT()
struct FReplicatedAimData
{
    GENERATED_BODY()

    UPROPERTY()
    FVector_NetQuantize10 TraceStart;

    UPROPERTY()
    uint16 PackedYaw = 0;

    UPROPERTY()
    uint16 PackedPitch = 0;

    bool NetSerialize(FArchive& Ar, UPackageMap* Map, bool& bOutSuccess)
    {
        bOutSuccess = true;

        // Quantized vector instead of a full-precision FVector.
        TraceStart.NetSerialize(Ar, Map, bOutSuccess);

        // Two compressed axes instead of two full floats.
        Ar.SerializeBits(&PackedYaw, 16);
        Ar.SerializeBits(&PackedPitch, 16);

        return true;
    }

    void SetFromRotator(const FRotator& InRotation)
    {
        PackedYaw = FRotator::CompressAxisToShort(InRotation.Yaw);
        PackedPitch = FRotator::CompressAxisToShort(InRotation.Pitch);
    }

    FRotator ToRotator() const
    {
        return FRotator(
            FRotator::DecompressAxisFromShort(PackedPitch),
            FRotator::DecompressAxisFromShort(PackedYaw),
            0.0f);
    }
};

template<>
struct TStructOpsTypeTraits<FReplicatedAimData>
    : public TStructOpsTypeTraitsBase2<FReplicatedAimData>
{
    enum
    {
        WithNetSerializer = true
    };
};
```
That example shows the core pattern:
- quantized vectors instead of raw FVector,
- compressed angles instead of full floats,
- and a tiny serialization function that stays aligned with gameplay needs.
This is also why it is useful to study engine types. For example, Epic’s API docs show that FHitResult advertises WithNetSerializer = true. Engine types that cross the network a lot usually already solve some of the packing problem for you.
What usually does not belong here? Big ownership graphs, arbitrary UObject blobs, or state that should really be replicated through normal reflected properties and engine-managed object references.
Replicating collections efficiently with Fast Arrays
Replicated arrays are where networking costs quietly hide. If a list changes one item at a time but you resend the whole thing every update, you are paying for far more data than the player actually changed.
```cpp
USTRUCT()
struct FInventoryEntry : public FFastArraySerializerItem
{
    GENERATED_BODY()

    UPROPERTY()
    FName ItemId;

    UPROPERTY()
    int32 Quantity = 0;
};

USTRUCT()
struct FInventoryList : public FFastArraySerializer
{
    GENERATED_BODY()

    UPROPERTY()
    TArray<FInventoryEntry> Entries;

    bool NetDeltaSerialize(FNetDeltaSerializeInfo& DeltaParms)
    {
        return FFastArraySerializer::FastArrayDeltaSerialize<
            FInventoryEntry,
            FInventoryList>(Entries, DeltaParms, *this);
    }

    void AddOrUpdate(const FName ItemId, const int32 Delta)
    {
        for (FInventoryEntry& Entry : Entries)
        {
            if (Entry.ItemId == ItemId)
            {
                Entry.Quantity += Delta;
                MarkItemDirty(Entry); // one element changed
                return;
            }
        }

        FInventoryEntry& NewEntry = Entries.AddDefaulted_GetRef();
        NewEntry.ItemId = ItemId;
        NewEntry.Quantity = Delta;
        MarkItemDirty(NewEntry);
    }

    void RemoveAt(const int32 Index)
    {
        Entries.RemoveAt(Index);
        MarkArrayDirty(); // array structure changed, not just one item
    }
};

template<>
struct TStructOpsTypeTraits<FInventoryList>
    : public TStructOpsTypeTraitsBase2<FInventoryList>
{
    enum
    {
        WithNetDeltaSerializer = true
    };
};
```
The three operations that matter most are easy to remember:
- MarkItemDirty when one element changed,
- MarkArrayDirty when array structure changed,
- and keep the item struct focused so the diff stays small.
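None of this replicates by itself: the list still has to live on a replicated property and be registered for lifetime replication. A minimal wiring sketch (UInventoryComponent is a hypothetical component for this article, not an engine type):

```cpp
#include "Net/UnrealNetwork.h"

UCLASS()
class UInventoryComponent : public UActorComponent
{
    GENERATED_BODY()

public:
    UInventoryComponent()
    {
        SetIsReplicatedByDefault(true);
    }

    UPROPERTY(Replicated)
    FInventoryList InventoryList;

    virtual void GetLifetimeReplicatedProps(
        TArray<FLifetimeProperty>& OutLifetimeProps) const override
    {
        Super::GetLifetimeReplicatedProps(OutLifetimeProps);
        DOREPLIFETIME(UInventoryComponent, InventoryList);
    }
};
```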
If you are moving to Iris, there is one extra footnote worth knowing. Epic’s FIrisFastArraySerializer docs explicitly note that it does not support local, non-replicated items living in the same array. That means client-only decorative entries or mixed local/replicated bookkeeping should usually live elsewhere.
Common mistakes that make serialization fragile
These are the mistakes that show up again and again in real projects:
- Using full-precision network data everywhere. It works early, then becomes expensive later.
- No versioning on custom disk formats. The first schema change turns old data into a migration problem.
- Treating SaveGame, replication, and object serialization as interchangeable. They are related, not identical.
- Replicating large arrays naïvely. Fast Arrays usually exist for a reason.
- Serializing object graphs you do not truly own. References, soft references, and asset IDs are often safer than raw object state dumps.
- **Best default for simple persistence:** Use USaveGame and keep the data model explicit.
- **Best default for bandwidth-sensitive structs:** Use NetSerialize and quantize deliberately.
- **Best default for mutable replicated lists:** Use FFastArraySerializer.
- **Best default for evolving binary blobs:** Add custom version registration before the first format change, not after.
If you keep those boundaries clear, serialization stops feeling like “one mysterious engine feature” and becomes a set of deliberate tools you can combine safely.
References and further reading
These were the main official docs used to refresh the article and keep the examples aligned with current Unreal Engine guidance.