Outputs & Artifacts
Rassket generates comprehensive outputs and artifacts that are ready for production use, decision-making, and sharing with stakeholders. This page details what you get and how to use it.
Models
Trained Model Files
Each trained model includes the following artifacts (a loading sketch follows the list):
- Model Binary: Serialized model file (pickle format)
- Model Metadata: Problem type, feature names, target column, domain
- Preprocessing Pipeline: Feature engineering and transformation steps
- Model Configuration: Hyperparameters and training configuration
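These artifacts can be loaded with standard Python tooling. A minimal sketch, assuming the package stores the model as model.pkl and the metadata as metadata.json (both file names are hypothetical; check the package's own documentation for the actual layout):

```python
import json
import pickle

# Hypothetical file names; the actual package layout may differ.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)      # serialized model binary

with open("metadata.json") as f:
    metadata = json.load(f)     # problem type, feature names, target column, domain

print(metadata.get("problem_type"), metadata.get("feature_names"))
```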
Model Information
- Model ID: Unique identifier for your model
- Model Type: Algorithm used (XGBoost, LightGBM, etc.)
- Problem Type: Regression or Classification
- Training Date: When the model was trained
- Performance Metrics: Evaluation metrics on test set
Model Comparison
When multiple models are trained, you receive:
- Comparison Table: Side-by-side performance metrics
- Best Model Identification: Top performer highlighted
- Individual Model Details: Full information for each model
Metrics
Regression Metrics
- R² (R-squared): Proportion of variance explained (at most 1, higher is better; can be negative when the model underperforms a mean-only baseline)
- RMSE: Root Mean Squared Error (lower is better)
- MAE: Mean Absolute Error (lower is better)
- MSE: Mean Squared Error (lower is better)
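These definitions correspond to the standard scikit-learn metrics. A minimal, Rassket-independent sketch of computing them from true and predicted values:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.8, 5.3, 2.9, 6.4])

r2 = r2_score(y_true, y_pred)               # proportion of variance explained
mse = mean_squared_error(y_true, y_pred)    # mean squared error
rmse = np.sqrt(mse)                         # root mean squared error
mae = mean_absolute_error(y_true, y_pred)   # mean absolute error

print(f"R2={r2:.3f}  RMSE={rmse:.3f}  MAE={mae:.3f}  MSE={mse:.3f}")
```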
Classification Metrics
- Accuracy: Overall correctness (0-1, higher is better)
- Precision: True positives / (True positives + False positives)
- Recall: True positives / (True positives + False negatives)
- F1 Score: Harmonic mean of precision and recall
- ROC-AUC: Area under ROC curve (for binary classification)
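Likewise, the classification metrics match their scikit-learn counterparts; a small illustrative sketch on toy labels:

```python
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]
y_proba = [0.2, 0.9, 0.4, 0.1, 0.8, 0.6, 0.7, 0.95]  # predicted P(class 1)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("recall   :", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("f1       :", f1_score(y_true, y_pred))         # harmonic mean of precision and recall
print("roc_auc  :", roc_auc_score(y_true, y_proba))   # binary classification only
```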
Comprehensive Metrics
Beyond the basic metrics, Rassket provides the following (a cross-validation sketch follows the list):
- Train/Validation/Test Metrics: Performance on each split
- Cross-Validation Scores: Robust performance estimates
- Confidence Intervals: Uncertainty estimates
- Statistical Tests: Significance testing (when applicable)
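Rassket's exact procedure is not reproduced here, but the general idea behind cross-validation scores and a rough confidence interval can be sketched with scikit-learn; the dataset, model, and normal-approximation interval below are purely illustrative:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

# 5-fold cross-validation yields a more robust estimate than a single split.
scores = cross_val_score(GradientBoostingRegressor(random_state=0), X, y,
                         cv=5, scoring="r2")

mean, sd = scores.mean(), scores.std(ddof=1)
half_width = 1.96 * sd / np.sqrt(len(scores))   # rough 95% interval on the mean score
print(f"CV R2: {mean:.3f} +/- {half_width:.3f}")
```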
Forecasts
Prediction Interface
The Results page includes a prediction interface where you can:
- Input feature values manually
- Get instant predictions
- See prediction confidence (for classification)
Batch Predictions
Model packages include code for making batch predictions on new data (a sketch of the workflow follows the list):
- Load model and preprocessing pipeline
- Process new data through pipeline
- Generate predictions
- Export results
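The inference script shipped in the package is authoritative; as a rough sketch, assuming a pickled pipeline.pkl and model.pkl plus a CSV of new records (all names hypothetical), the workflow looks like this:

```python
import pickle

import pandas as pd

# Hypothetical file names; the package's own inference script documents the real ones.
with open("pipeline.pkl", "rb") as f:
    pipeline = pickle.load(f)           # load preprocessing pipeline
with open("model.pkl", "rb") as f:
    model = pickle.load(f)              # load trained model

new_data = pd.read_csv("new_data.csv")              # new, unlabeled records
X = pipeline.transform(new_data)                    # process new data through the pipeline
new_data["prediction"] = model.predict(X)           # generate predictions
new_data.to_csv("predictions.csv", index=False)     # export results
```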
Forecast Quality
Forecasts include:
- Point Predictions: Single predicted value
- Confidence Intervals: Uncertainty ranges (when available)
- Feature Contributions: SHAP values showing feature impact
Explanations
Feature Importance
- SHAP Values: Unified measure of feature importance (see the sketch after this list)
- Global Importance: Overall feature contributions
- Local Importance: Feature contributions for individual predictions
- Visualizations: Interactive charts showing feature importance
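The distinction between global and local importance can be reproduced outside Rassket with the shap library; a minimal sketch on a toy XGBoost regressor (not the platform's internal code):

```python
import numpy as np
import shap
import xgboost
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = xgboost.XGBRegressor(n_estimators=50).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)                 # one row of contributions per sample

global_importance = np.abs(shap_values).mean(axis=0)   # overall feature contributions
local_importance = shap_values[0]                      # contributions for one prediction
print(global_importance, local_importance)
```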
AI-Powered Insights
- Performance Summary: Plain-language explanation of model performance
- Feature Insights: Which features matter most and why
- Model Reliability: Assessment of model confidence and limitations
- Real-World Implications: How to use predictions in decision-making
Domain-Aware Explanations
When a domain is selected, explanations include:
- Domain-specific terminology
- Context-appropriate interpretations
- Industry-relevant insights
- Actionable recommendations
Reports
PDF Executive Reports
Exportable PDF reports include:
- Executive Summary: High-level overview of results
- Model Performance: Comprehensive metrics and visualizations
- Feature Analysis: Feature importance and insights
- Visualizations: Key charts and diagnostics
- Domain Insights: Domain-aware analysis (if domain selected)
- Recommendations: Actionable next steps
Model Packages (ZIP)
Exportable model packages include the following (an unpacking sketch follows the list):
- Trained Model: Serialized model file
- Preprocessing Pipeline: Feature engineering code
- Inference Script: Code for making predictions
- Requirements: Python dependencies list
- Documentation: Model documentation and usage guide
- Metadata: Model configuration and training details
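A small sketch of unpacking the archive and inspecting its contents; the archive name model_package.zip is hypothetical (the real file name may embed the model ID):

```python
import zipfile
from pathlib import Path

package = Path("model_package.zip")   # hypothetical name
target = Path("model_package")

with zipfile.ZipFile(package) as zf:
    zf.extractall(target)

# Expect files such as the serialized model, preprocessing pipeline,
# inference script, requirements list, documentation, and metadata.
for item in sorted(target.iterdir()):
    print(item.name)
```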
Export Formats: PDF reports are well suited to presentations and documentation; model packages are intended for production deployment and integration into your systems.
Visualizations
Default Visualizations
- Feature Importance: Bar chart of feature contributions
- Confusion Matrix: Classification accuracy breakdown
- ROC Curve: Classification performance visualization
- Prediction Distribution: Histogram of predicted values
- Metrics Charts: Visual representation of evaluation metrics
- Correlation Heatmap: Feature correlation matrix
Advanced Diagnostics
- Residual Diagnostics: Q-Q plots, residuals vs fitted (regression)
- R² Indicators: Goodness-of-fit visualizations (regression)
- Calibration Curve: Prediction probability calibration (classification)
- Outcomes Summary: Pie chart of prediction outcomes
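As an illustration of the calibration diagnostic, scikit-learn's calibration_curve compares predicted probabilities with observed outcome frequencies; the toy data below is purely illustrative:

```python
import numpy as np
from sklearn.calibration import calibration_curve

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])
y_proba = np.array([0.1, 0.3, 0.7, 0.8, 0.6, 0.2, 0.9, 0.4, 0.65, 0.85])

# A well-calibrated model tracks the diagonal: the predicted probability
# in each bin roughly equals the observed fraction of positives.
frac_pos, mean_pred = calibration_curve(y_true, y_proba, n_bins=3)
print(frac_pos, mean_pred)
```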
Visualization Format
- Interactive Charts: Plotly-based interactive visualizations
- Exportable: Can be exported as images
- Embedded in Reports: Included in PDF reports
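For example, any Plotly figure can be saved as standalone HTML or, with the kaleido package installed, as a static image; this generic snippet is not tied to Rassket's chart definitions:

```python
import plotly.express as px

fig = px.bar(x=["feature_a", "feature_b", "feature_c"], y=[0.5, 0.3, 0.2],
             title="Feature importance (illustrative)")

fig.write_html("feature_importance.html")   # interactive, self-contained HTML
fig.write_image("feature_importance.png")   # static export (requires kaleido)
```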
How Outputs Are Used
In Production
- Model Deployment: Use model packages to deploy models to production
- API Integration: Integrate models into APIs and services
- Batch Processing: Run batch predictions on new data
- Real-Time Predictions: Make predictions in real-time applications
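A minimal FastAPI sketch of wrapping an exported model behind an HTTP endpoint; the file names, feature names, and endpoint are hypothetical and not part of the Rassket package itself:

```python
import pickle

import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

# Hypothetical artifact names from an exported model package.
with open("pipeline.pkl", "rb") as f:
    pipeline = pickle.load(f)
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

app = FastAPI()

class Record(BaseModel):
    # Replace with the model's actual feature names.
    feature_a: float
    feature_b: float

@app.post("/predict")
def predict(record: Record):
    X = pipeline.transform(pd.DataFrame([record.dict()]))
    return {"prediction": float(model.predict(X)[0])}
```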
In Decision-Making
- Executive Reports: Share PDF reports with decision-makers
- Insights: Use AI-powered insights to inform decisions
- Feature Analysis: Understand what drives predictions
- Forecasts: Use predictions to plan and optimize
In Communication
- Stakeholder Presentations: Use PDF reports for presentations
- Documentation: Include reports in project documentation
- Collaboration: Share visualizations and insights with teams
- Reporting: Include metrics and forecasts in regular reports
Export Options
Export Model Package
Click "Export Model Package" to download a ZIP file containing:
- Everything needed to deploy and use the model
- Preprocessing pipeline
- Inference code
- Documentation
Export PDF Report
Click "Export PDF Report" to generate a comprehensive PDF including:
- Executive summary
- All visualizations
- Metrics and analysis
- Domain-aware insights
Data Privacy and Handling
Data Storage
- Uploaded files are stored securely
- Models are stored with unique identifiers
- Data is processed according to your domain selection
Export Security
- Model packages contain no raw data
- Only model files and preprocessing pipelines are exported
- Reports contain aggregated metrics and visualizations
Privacy: Rassket processes your data to train models but does not retain raw data in exported packages. Only model artifacts and aggregated insights are exported.
Next Steps
Now that you understand outputs and artifacts:
- Review the Dashboard Walkthrough to see outputs in context
- Explore Use Cases to see how outputs are used in practice
- Check the FAQ for common questions about outputs