I'm currently building a basic plugin system into my application. Ideally the application should know nothing about each plugin's concrete class, so I work through the base Plugin class when grabbing the appropriate memory-management functions, like so:
void *handle = nullptr;
if (!(handle = dlopen(path.c_str(), RTLD_LOCAL | RTLD_NOW))) {
    throw std::runtime_error("Failed to load library: " + path);
}

using allocClass = Plugin *(*)();
using deleteClass = void (*)(Plugin *);

auto allocFunc = reinterpret_cast<allocClass>(
    dlsym(handle, allocClassSymbol.c_str()));
auto deleteFunc = reinterpret_cast<deleteClass>(
    dlsym(handle, deleteClassSymbol.c_str()));
if (!allocFunc || !deleteFunc) {
    throw std::runtime_error("Allocator or deleter not found");
}

return std::shared_ptr<Plugin>(
    allocFunc(),
    [deleteFunc](Plugin *p) { deleteFunc(p); });
On the plugin side, the alloc/delete functions just call new and delete, e.g.:
extern "C" {
TestPlugin *allocator() {
    return new TestPlugin();
}

void deleter(TestPlugin *ptr) {
    delete ptr;
}
}
My question is about the safety of this type mismatch: the plugin defines the functions in terms of its own derived type, while the loader casts them to signatures written in terms of the base type. In limited testing nothing appears to go wrong, but I'm not sure whether under the hood only the Plugin subobject is being destroyed (effectively slicing the object) rather than the full derived object.
Are there better ways to go about this without the application importing each plugin's headers?