I don’t think this is a good way forward.
SQLite does not have a dedicated date/time storage type. Instead, you can use a
text field (ISO 8601, with or without a UTC offset), a real (Julian Day Number)
or an integer (Unix time).
Only the text format can have a timezone offset associated with the value, which
is then taken into account by the SQLite date/time functions, but in most cases
the calculations are done using UTC. SQLite assumes that date/time string
values without an offset are UTC times and returns UTC from its date/time
functions, but it doesn't even apply a UTC offset in all cases where it should.
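A quick session with Python's sqlite3 module (used here just as a convenient
SQLite frontend) shows this UTC interpretation:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# No offset: SQLite treats the value as UTC and leaves it unchanged.
no_offset = con.execute(
    "SELECT datetime('2024-01-01 12:00:00')").fetchone()[0]

# With an offset: SQLite normalizes the value to UTC.
with_offset = con.execute(
    "SELECT datetime('2024-01-01 12:00:00+02:00')").fetchone()[0]

print(no_offset)    # 2024-01-01 12:00:00
print(with_offset)  # 2024-01-01 10:00:00
```

Note that the offset is consumed during normalization: the returned strings no
longer carry it.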
The problem with all this is that data read into a database will often not carry
timezone information. It can be local time, UTC or some other time zone, which
can only be deduced from related context (e.g. origin of data, source
conventions, separate database fields, etc.).
The naive datetime type is meant for exactly those cases.
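This is exactly what Python's datetime parsing gives you for such values:

```python
from datetime import datetime

# A stored value without a UTC offset parses to a naive instance;
# how to interpret it is left to the application.
dt = datetime.fromisoformat("2024-01-01 12:00:00")
print(dt.tzinfo)  # None

# The same value with an offset parses to an aware instance.
aware = datetime.fromisoformat("2024-01-01 12:00:00+02:00")
print(aware.tzinfo)  # UTC+02:00
```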
Now, you could argue that having an offset is an indicator that a specific
timezone is given, which is correct, but what do you then do with time values
which do not carry this extra information? The only correct way to go about it
is to return a naive datetime instance in those cases, which creates issues
when the data mixes values stored with and without a UTC offset.
IMO, the decision should be left to the application. The current default is the
most correct in terms of not creating problems downstream. Had there been major
problems with the default, this would have been raised many years ago.
People who do need the UTC offset interpretation can register a different
converter and apply extra logic based on the application context to fill in the
gaps for values which do not have a UTC offset in the database.
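As a sketch of such a converter (the "treat offset-less values as UTC" fallback
is an application-level assumption chosen for illustration, not anything
sqlite3 mandates; your context may dictate a different interpretation):

```python
import sqlite3
from datetime import datetime, timezone

def convert_timestamp(raw: bytes) -> datetime:
    # Parse the stored ISO text; values without an offset come back naive.
    dt = datetime.fromisoformat(raw.decode())
    if dt.tzinfo is None:
        # Application-specific gap-filling: here we assume naive values
        # were stored as UTC.
        dt = dt.replace(tzinfo=timezone.utc)
    return dt

sqlite3.register_converter("timestamp", convert_timestamp)

con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
con.execute("CREATE TABLE t (ts timestamp)")
con.execute("INSERT INTO t VALUES ('2024-01-01 12:00:00'), "
            "('2024-01-01 12:00:00+02:00')")

for (ts,) in con.execute("SELECT ts FROM t"):
    print(ts.isoformat())
```

With this in place, every value read back is timezone-aware, so mixed data no
longer produces a mix of naive and aware instances.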
PS: Best practice is to always store datetime values as UTC in a database and
then use a separate field to hold the origin timezone string / location (where
needed). This causes the fewest surprises and errors. It can also be faster,
depending on how the database stores these values.
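A minimal sketch of that layout (the table and column names are just for
illustration): normalize to UTC on the way in, keep the origin timezone in a
separate column, and only re-attach it when presenting the value.

```python
import sqlite3
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (utc_ts TEXT, origin_tz TEXT)")

# Normalize to UTC on the way in; remember where the value came from.
local = datetime(2024, 6, 1, 9, 30, tzinfo=ZoneInfo("Europe/Berlin"))
con.execute(
    "INSERT INTO events VALUES (?, ?)",
    (local.astimezone(timezone.utc).isoformat(sep=" "), "Europe/Berlin"),
)

# Re-attach the origin timezone only when displaying the value.
utc_ts, origin_tz = con.execute(
    "SELECT utc_ts, origin_tz FROM events").fetchone()
shown = datetime.fromisoformat(utc_ts).astimezone(ZoneInfo(origin_tz))
print(shown.isoformat())  # 2024-06-01T09:30:00+02:00
```

Since all stored values share one timezone (UTC), they also compare and sort
correctly as plain text.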