Integrate Semantic Kernel services across local and remote providers using a unified codebase. Seamlessly switch between local models served through Ollama (such as phi3.5 and llama3.2) and remote providers (Azure OpenAI, OpenAI, Google Gemini, and Groq) simply by changing the service name. Leverage function calling and memory with all supported models.
## Table of Contents

- [About the Project](#about-the-project)
- [Features](#features)
- [Getting Started](#getting-started)
- [Usage](#usage)
- [Roadmap](#roadmap)
- [Contributing](#contributing)
- [License](#license)
- [Contact](#contact)
- [Acknowledgments](#acknowledgments)
## About the Project

This project showcases how to use Microsoft's Semantic Kernel with both local models served through Ollama (e.g., phi3.5, llama3.2) and remote services (Azure OpenAI, OpenAI, Google Gemini, Groq) from the same codebase. By simply changing the service name, you can switch between AI providers without altering your core logic.
## Features

- Unified Kernel Creation: Integrate all Semantic Kernel services through a single kernel setup (see the sketch after this list).
- Multi-Provider Support: Seamlessly switch between local and remote AI models.
- Function Calling: Use function calling capabilities across all supported models.
- Memory Integration: Implement memory functions with ease.
- Extensible Architecture: Easily expand to support more providers and functionalities.
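As a sketch of that unified setup, the snippet below builds a `Kernel` for whichever provider a configured service name selects. The connector methods come from the official `Microsoft.SemanticKernel` packages (the Ollama connector ships as a preview package), but the `CreateKernel` helper, the service-name strings, and the endpoint values are illustrative assumptions, not the repository's actual code.

```csharp
using Microsoft.SemanticKernel;

// Illustrative helper: one entry point, any provider. The service names,
// endpoint, and overall shape are assumptions made for this sketch.
Kernel CreateKernel(string serviceName, string modelId, string apiKey)
{
    var builder = Kernel.CreateBuilder();

    switch (serviceName)
    {
        case "OpenAI":
            builder.AddOpenAIChatCompletion(modelId, apiKey);
            break;
        case "AzureOpenAI":
            builder.AddAzureOpenAIChatCompletion(
                deploymentName: modelId,
                endpoint: "https://your-resource.openai.azure.com/",
                apiKey: apiKey);
            break;
        case "Ollama":
            // Local models (e.g. phi3.5, llama3.2) served by Ollama;
            // no API key is needed for a local endpoint.
            builder.AddOllamaChatCompletion(modelId, new Uri("http://localhost:11434"));
            break;
        default:
            throw new ArgumentException($"Unknown service name: {serviceName}");
    }

    return builder.Build();
}

// Switching providers is then a one-argument change:
var kernel = CreateKernel("Ollama", "llama3.2", apiKey: "");
Console.WriteLine(await kernel.InvokePromptAsync("Say hello in one sentence."));
```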
## Getting Started

### Prerequisites

- Development Environment: Visual Studio or Visual Studio Code
- .NET SDK: Ensure you have the latest .NET SDK installed (you can verify it with the command below)
- API Keys: Obtain API keys for the services you wish to use
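To confirm the SDK is on your PATH before building, you can run:

```bash
dotnet --version
```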
### Installation

- Clone the Repository

  ```bash
  git clone https://github.com/peopleworks/SemanticKernelFromLocalToCloud.git
  ```

- Open the Project
  - Navigate to the project directory and open it in Visual Studio or Visual Studio Code.
- Update API Keys
  - Locate the `appsettings.json` file.
  - Update the `ApiKeys` section with your respective API keys (a hypothetical example follows these steps).
- Run the Application
  - Use your IDE's build and run feature to start the application.
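The exact keys live in the repository's `appsettings.json`; purely as a hypothetical illustration, the `ApiKeys` section might be shaped like this (every key name and value below is a placeholder):

```json
{
  "ApiKeys": {
    "OpenAI": "<your-openai-key>",
    "AzureOpenAI": "<your-azure-openai-key>",
    "GoogleGemini": "<your-gemini-key>",
    "Groq": "<your-groq-key>"
  }
}
```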
## Usage

- Switch Providers
  - Change the service name in the configuration to switch between AI models.
- Function Calling
  - Use the function calling feature as documented in Semantic Kernel (a sketch follows this list).
- Memory Features
  - Implement and test memory functionality across the different models (a second sketch follows).
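As a minimal function-calling sketch, assuming the standard `[KernelFunction]` attribute and the OpenAI connector's auto-invoke setting: the `TimePlugin` class, the model name, and the prompt are illustrative examples, not code from this repository.

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Build a kernel (any chat provider with tool support works here;
// the model name and key are placeholders).
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("gpt-4o-mini", "<your-openai-key>")
    .Build();

kernel.ImportPluginFromType<TimePlugin>();

// Let the model decide when to call registered functions.
var settings = new OpenAIPromptExecutionSettings
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};

var answer = await kernel.InvokePromptAsync(
    "What time is it right now?",
    new KernelArguments(settings));
Console.WriteLine(answer);

// An illustrative plugin the model can call.
public class TimePlugin
{
    [KernelFunction, Description("Returns the current local time.")]
    public string GetCurrentTime() => DateTime.Now.ToString("HH:mm");
}
```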
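For memory, Semantic Kernel's memory APIs are experimental and have shifted across releases; the sketch below assumes the `MemoryBuilder`/`VolatileMemoryStore` pattern from the `Microsoft.SemanticKernel.Plugins.Memory` package, with placeholder model names, keys, and texts.

```csharp
// Experimental memory APIs: the SKEXP warnings must be suppressed to compile
// (the exact warning IDs can vary between Semantic Kernel releases).
#pragma warning disable SKEXP0001, SKEXP0010, SKEXP0050
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Memory;

// In-memory store plus OpenAI embeddings (model and key are placeholders).
var memory = new MemoryBuilder()
    .WithOpenAITextEmbeddingGeneration("text-embedding-3-small", "<your-openai-key>")
    .WithMemoryStore(new VolatileMemoryStore())
    .Build();

// Save a fact, then retrieve it by semantic similarity.
await memory.SaveInformationAsync("facts",
    "Semantic Kernel can target local and remote models from one codebase.",
    id: "fact-1");

await foreach (var match in memory.SearchAsync("facts", "What models can I use?", limit: 1))
{
    Console.WriteLine(match.Metadata.Text);
}
```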
## Roadmap

Upcoming features:

- Retrieval-Augmented Generation (RAG) for both local and remote models.
- Additional functions and enhancements in the next version.

Stay tuned for more exciting updates!
## Contributing

Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
## License

Distributed under the MIT License. See the `LICENSE` file for more information.
## Contact

- Email: [email protected]
## Acknowledgments

- Microsoft® Semantic Kernel Team: For their outstanding work.
- Ollama: For providing an excellent tool.
- Google: For their fast and well-crafted models.
- Groq: For being a part of the developer community.
- Meta: For supporting open-source initiatives.
This README provides a comprehensive overview of the project to help developers and contributors understand and use Semantic Kernel across various platforms.