Introducing Dolce – A Python Library for AI Development

Hi, I’m not sure if I’m allowed to post about personal projects, but I wanted to share something I’ve been working on.

I’ve created Dolce, a Python library designed to help developers create AI more easily and efficiently. It’s still in early development, and it’s open-source under the MIT License, so you’re free to use, modify, and contribute to it.

If you’re interested, you can check it out here:
GitHub: kon1313/Dolce-Py

I’d love to hear any feedback or suggestions from the community!

Sweet project.


Badumtish.


Can you explain what this project is doing? I’ve had a look at the code and the train_model function seems to be producing random values instead of actual training.

    self.logger.info(f"Starting model training with {len(data)} samples")
    losses = []
    accuracies = []

    for epoch in range(epochs):
        batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
        epoch_loss = 0

        for batch in batches:
            time.sleep(0.05)
            batch_loss = random.uniform(0.1, 1.0) / (epoch + 1)
            epoch_loss += batch_loss

        epoch_loss /= len(batches)
        losses.append(epoch_loss)
        accuracies.append(min(95, 70 + epoch * 2.5))
        self.logger.debug(f"Epoch {epoch+1}/{epochs} - Loss: {epoch_loss:.4f}")

It is also reading some pickle files, so I would be wary of executing the demo.
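For anyone unfamiliar with why pickle deserves that caution: loading a pickle file can execute arbitrary code, because `pickle` calls whatever a class's `__reduce__` returns during loading. Here is a minimal, deliberately benign illustration (this is not Dolce code, just a demonstration of the mechanism):

```python
import pickle

class Payload:
    """A malicious-style class: pickle invokes the callable returned
    by __reduce__ when loading, so loading untrusted bytes runs code."""

    def __reduce__(self):
        # Benign stand-in; a real attack could return (os.system, ("...",)).
        return (str.upper, ("this ran during unpickle",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # the callable runs on load, not on any explicit call
print(result)
```

The takeaway is the standard one from the `pickle` docs: never unpickle data you didn't produce yourself.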

Thanks for checking out Dolce and for your thoughtful questions!

You’re absolutely right about the train_model function: it’s not performing actual machine-learning training. Dolce is designed as an easy-to-use library for beginners and developers who want to experiment with AI/LLM concepts without diving into heavy frameworks like TensorFlow or PyTorch. The train_model function is a simulation that generates plausible training metrics (loss and accuracy) to demonstrate how a training loop might work. It uses random.uniform() for the loss and a simple formula for the accuracy to keep things lightweight and accessible. The input data is currently only used to define batch sizes, not for real training; I wanted to focus on usability over complexity for this initial version. That said, your observation is spot on, and I’m considering adding an optional real training mode (e.g., with a simple neural network) in a future update. Any suggestions on what you’d like to see there?
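Since a real training mode is on the table, here is a minimal sketch of what one could look like: plain-Python gradient descent fitting a linear model, with no heavy dependencies. The function name `train_linear` is hypothetical, not part of Dolce's current API:

```python
import random

def train_linear(data, epochs=10, lr=0.01):
    """Toy real training: fit y = w*x + b by gradient descent.

    `data` is a list of (x, y) pairs. Returns the learned weights
    and the per-epoch mean squared error, so a genuine loss curve
    replaces the simulated one.
    """
    w, b = random.uniform(-1, 1), random.uniform(-1, 1)
    losses = []
    for _ in range(epochs):
        grad_w = grad_b = loss = 0.0
        for x, y in data:
            err = (w * x + b) - y        # prediction error on one sample
            loss += err * err
            grad_w += 2 * err * x        # d(err^2)/dw
            grad_b += 2 * err            # d(err^2)/db
        n = len(data)
        w -= lr * grad_w / n
        b -= lr * grad_b / n
        losses.append(loss / n)
    return w, b, losses

# Example: recover y = 2x + 1 from a handful of points.
points = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b, losses = train_linear(points, epochs=200, lr=0.02)
```

The loss values it logs would then actually decrease because the model improves, rather than because of the `/ (epoch + 1)` divisor in the simulation.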

Regarding the pickle files, I appreciate the caution! The demo uses save_model and load_model to simulate model persistence, storing and reading state with Python’s pickle. In this case, the .dolce files (like demo_model.dolce) are generated locally by the demo itself, so they’re safe to run as long as you’re executing the source I provided. I totally get the security concern with untrusted pickle files, though; it’s a good point. I’ll add a note in the README to clarify this and may explore safer serialization options (like JSON) for broader use.

Dolce’s goal is to provide a simple, extensible base for AI experimentation: things like text generation, data analysis, and a chatbot, all wrapped in an approachable package. I’d love to hear your thoughts on how it could be improved or what features you’d find useful!

To foster discussion on your GitHub project, I suggest you enable the Discussions tab there.

Hi Mark, thank you for your suggestion. I have enabled the Discussions page.