Changes:
1) `#include <boost/pfr...` now implicitly does `import boost.pfr` if modules are supported
2) CI now tests modules on Ubuntu 24.04 with existing runtime tests
3) Renamed module to `boost.pfr`
4) CMakeLists.txt now uses modules for `Boost::pfr` target if modules are supported
5) All the library internals now have unconditional module-level linkage. Together with `1)`, this allows users to mix `#include <boost/pfr...` and `import boost.pfr` in user code without ODR violations, as sketched below.
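Below is a minimal sketch of the usage that items `1)` and `5)` enable; the `point` aggregate and `main` are illustrative and not part of the library:

```cpp
// One translation unit consumes the library through the named module...
import boost.pfr;
// ...while another TU of the same program can keep the classic include
// (shown commented out to keep the sketch a single file). With module
// support detected, the include effectively does `import boost.pfr`,
// so both TUs see the same entities and there is no ODR violation.
//     #include <boost/pfr.hpp>

struct point { int x; int y; };   // illustrative aggregate

int main() {
    point p{1, 2};
    return boost::pfr::get<0>(p) + boost::pfr::get<1>(p); // public PFR API; returns 3
}
```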
Significant differences from https://anarthal.github.io/cppblog/modules3:
* PFR uses a `BOOST_PFR_USE_STD_MODULE` macro to switch between `import std;` and plain includes while building the module. This allows using the `boost.pfr` module in C++20 and even without a usable `std` module (a sketch of this pattern follows below).
* Start the upper-bound search for the field count from `4` fields, to avoid slow startup on typical workloads
* Inline the `fields_count_binary_search_unbounded` function to reduce the template instantiation depth by 1
* Renamed `min` to `min_of_size_t` to avoid weird syntax
* Applied the idea of better error reporting from #120
* Do not start the field count computation if one of the static asserts has failed. That speeds up error reporting in edge cases
* Use `std::*_t` versions of traits as they are faster in some implementations
* Rewrite the binary search to simplify it and to avoid degradation to linear search on types that have a constructor from a variadic pack
* Remove default template parameters to simplify code
As a result, the whole test suite now runs 10%-25% faster on MSVC, ~20% faster on Clang, and 7%-20% faster on GCC.
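As a rough illustration of the `BOOST_PFR_USE_STD_MODULE` point above, a module interface unit can be wired along these lines; this is a hedged sketch of the pattern, not the actual PFR module unit, and the listed standard headers are placeholders:

```cpp
// Global module fragment: used only when `import std;` is not requested,
// so the module still builds on C++20 toolchains without a usable `std` module.
module;
#ifndef BOOST_PFR_USE_STD_MODULE
#include <cstddef>       // placeholder standard headers; the real list differs
#include <type_traits>
#include <utility>
#endif

export module boost.pfr;

#ifdef BOOST_PFR_USE_STD_MODULE
import std;              // C++23 standard library module
#endif

// ... exported declarations follow here ...
```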
The tightest upper bound one can specify on the number of fields in a
struct is `sizeof(type) * CHAR_BIT`. So this was previously used when
performing a binary search for the field count. This upper bound is
extremely loose when considering a typical large struct, which is more
likely to contain a relatively small number of relatively large fields
rather than the other way around. The binary search range being multiple
orders of magnitude larger than necessary wouldn't have been a
significant issue if each test were cheap, but it is not. Testing a
field count of N costs O(N) memory and time. As a result, the initial
few steps of the binary search may be prohibitively expensive.
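To make the O(N) cost concrete, here is a self-contained sketch of the kind of probe such a search performs; the names `anything`, `probe`, and `initializable_with_n` are invented for this example and this is not PFR's actual implementation. Answering "can `T` be aggregate-initialized from N values?" requires expanding a pack of N wildcard arguments, which is where the per-probe O(N) cost comes from:

```cpp
#include <cstddef>
#include <utility>

// Wildcard implicitly convertible to anything; used only in an unevaluated
// context, so the conversion operator needs no definition.
struct anything {
    template <class T>
    constexpr operator T() const noexcept;
};

// Selected when T{v0, v1, ..., v(N-1)} is well-formed.
template <class T, std::size_t... I>
constexpr auto probe(std::index_sequence<I...>, int)
    -> decltype(T{ (static_cast<void>(I), anything{})... }, true)
{
    return true;
}

// Fallback when the aggregate initialization above is ill-formed.
template <class T, std::size_t... I>
constexpr bool probe(std::index_sequence<I...>, long)
{
    return false;
}

// "Does T accept N initializers?" -- each instantiation expands N arguments.
template <class T, std::size_t N>
constexpr bool initializable_with_n = probe<T>(std::make_index_sequence<N>{}, 0);

struct sample { int a; double b; char c; };   // illustrative aggregate

static_assert(initializable_with_n<sample, 3>);
static_assert(!initializable_with_n<sample, 4>);
```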
The primary optimization introduced by these changes is to use unbounded
binary search, a.k.a. exponential search, instead of the typically
loosely bounded binary search. This produces a tight upper bound on the
field count (within 2x of the actual count) for the subsequent binary search.
As an upside of this change, the compiler-specific cap that was previously
placed on the field-count upper bound to stay within compiler limits could
be removed.
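At the value level, the search strategy described above can be sketched as follows; `fits(n)` stands in for the compile-time "does the type accept n initializers?" probe, and the starting guess of 4 mirrors the change listed earlier. This is an illustrative run-time analogue, not the library's actual code:

```cpp
#include <cstddef>

template <class Fits>
constexpr std::size_t field_count(Fits fits) {
    // Exponential phase: double the bound until a probe fails. The first
    // failing bound is at most about twice the true field count, and only
    // O(log N) probes are spent getting there.
    std::size_t lo = 0;   // largest count known to fit
    std::size_t hi = 4;   // small initial guess, cheap to test
    while (fits(hi)) { lo = hi; hi *= 2; }
    // Binary-search phase on the now-tight open range (lo, hi):
    // fits(lo) is known to hold (or lo == 0), fits(hi) is known to fail.
    while (lo + 1 < hi) {
        const std::size_t mid = lo + (hi - lo) / 2;
        if (fits(mid)) lo = mid; else hi = mid;
    }
    return lo;
}

// Example: a predicate that behaves as if the type had 13 fields.
static_assert(field_count([](std::size_t n) { return n <= 13; }) == 13);
```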