Lower-precision quantization kernels

CTranslate2 currently supports 8-bit integer quantization (INT8) and 16-bit floating-point formats (FP16/BF16), but does not yet provide native support for 4-bit quantization. Do the developers plan to add lower-bit quantization operators in the future? I would also like to contribute to implementing these operators myself.
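For context on what such operators would involve, here is a minimal NumPy sketch of symmetric 4-bit weight quantization with two values packed per byte. This is illustrative only and does not use any CTranslate2 API; the function names and the per-row scaling scheme are my own assumptions, not part of the library.

```python
import numpy as np

def quantize_int4(weights: np.ndarray):
    """Symmetric per-row 4-bit quantization: values mapped to [-7, 7]."""
    # One scale per output row (per-channel), a common choice for weight matrices.
    scale = np.abs(weights).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero rows
    q = np.clip(np.round(weights / scale), -7, 7).astype(np.int8)
    # Pack two 4-bit values into one byte to halve storage (assumes even width).
    shifted = (q + 8).astype(np.uint8)  # shift to the unsigned range [1, 15]
    packed = (shifted[:, ::2] << 4) | shifted[:, 1::2]
    return packed, scale

def dequantize_int4(packed: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Unpack the nibbles and rescale back to float32."""
    hi = (packed >> 4).astype(np.int8) - 8
    lo = (packed & 0x0F).astype(np.int8) - 8
    q = np.empty((packed.shape[0], packed.shape[1] * 2), dtype=np.int8)
    q[:, ::2] = hi
    q[:, 1::2] = lo
    return q.astype(np.float32) * scale
```

A real kernel would fuse the unpacking and rescaling into the GEMM itself rather than materializing the dequantized matrix, which is where most of the implementation effort lies.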