Error when installing logstash-input-dynamodb plugin #12
This is because of #9.
Thank you for the response. I used this to get this far: https://github.com/genesi/logstash-input-dynamodb/tree/b1be5b5e848d53e5e10a5b72740fc4bbf9c92e94 but I am now getting this error:

```
Plugin version conflict, aborting
In Gemfile:
```

Thanks for your help so far.
I'm getting the same problem, using jruby 1.7.20 (1.9.3p551), java version "1.8.0_66", and logstash 2.2.0. I've tried using the master version of the plugin, as well as the version forked at genesi by @ahmedammar. I'm getting the exact same errors as @bretd25. Is there a simple way to get around this? Should I just use an older version of logstash until this plugin is working with the latest?
@brett--anderson I ended up getting the plugin installed and working even with these errors. I looked at this: https://hub.docker.com/r/mantika/logstash-dynamodb-streams/~/dockerfile/

I ran this:

```
gem install logstash-input-dynamodb logstash-filter-dynamodb
```

then I ran:

```
plugin install logstash-input-dynamodb logstash-filter-dynamodb
```

Then I checked the list of plugins and noticed the plugin was now in the list, so I configured it and everything is working. Hope this helps.
Thanks @bretd25. I'm using ElasticSearch Service on AWS, which is still using version 1.5.x under the hood. This means I don't need to use LogStash 2.x, and when I tried setting things up with a pre-2.0 version of LogStash I got everything working. I'll revisit this when AWS supports ES >= 2.0 and I need to update LogStash and the DynamoDB plugin.
@bretd25 and @ahmedammar, now that I've got this working, the raw messages from the DynamoDB stream are just being stored in ES. Do either of you know a good resource that covers setting up a LogStash filter so that each column in the original DynamoDB table is a field in the ES index?
@bretd25 Thanks for that, I implemented that filter and my table columns are now searchable document fields in the index. I have however noticed that when I restart Logstash it re-adds all the records again. I was hoping some magic would only insert changed data (and also remove and update records as needed). I'd really appreciate it if you could let me know how you managed to get that to work?
Use the perform_scan option under the dynamodb input, as sketched below. Bret
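The original config snippet did not survive in this thread; here is a minimal sketch of a dynamodb input with perform_scan disabled, based on the plugin's documented options (the table name and endpoints are placeholders, not values from this thread):

```
input {
  dynamodb {
    table_name => "my_table"                                        # placeholder table name
    endpoint => "dynamodb.us-east-1.amazonaws.com"                  # placeholder region endpoint
    streams_endpoint => "streams.dynamodb.us-east-1.amazonaws.com"  # placeholder streams endpoint
    view_type => "new_and_old_images"
    perform_scan => false                                           # skip the full table scan on startup
  }
}
```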
@bretd25 Thanks again for your help. I got the impression that you could use Logstash to synchronise your DynamoDB and Elastic Search cluster. If I remove an item from DynamoDB, it adds an entry to ES detailing the remove, but I expect it to actually remove the item from ES. Likewise, if I stop Logstash, add a single record in DynamoDB, then restart Logstash, I would expect it to add that single record, and not scan the entire database, adding all the records as duplicates. Do you know if Logstash is actually a good fit for my intended scenario? I got the impression it was from the press releases from AWS, but none of their tutorials seem to go far enough to actually provide this synchronisation I require; they just log all your DynamoDB activity. Do I need to explicitly define the primary key or something so that Logstash better understands uniqueness? Or do I need to use a custom output plugin that will handle updates and deletes in the ES cluster correctly?
@brett--anderson the current ES output plugin supports deleting documents, but you need to specifically configure the action parameter: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html I haven't done this myself but it shouldn't be too complicated.
We do not delete any documents yet, so I cannot comment on how to do it. I may need to figure this out myself after my next meeting. If you figure it out I would love an update, or if I figure it out first I will update the thread.
I changed my output to set a document id:
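The output block itself was lost from this comment; a sketch of what setting a document id looks like with the elasticsearch output (parameter names follow the Logstash 2.x output; the host and index are placeholders, and item_uuid is the field named in the next paragraph):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]     # placeholder host
    index => "my_index"             # placeholder index name
    document_id => "%{item_uuid}"   # use the DynamoDB hash key as the ES document id
  }
}
```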
Here 'item_uuid' is the name of my primary key in DynamoDB. The table I'm tracking only has a hash key and no range key, so this is fairly straightforward. This stopped duplicates being entered into ES even if the 'perform_scan' option is enabled. Although this stopped the duplicates, it's still logging the remove operations, and presumably it would do the same for updates. I think I need to change the output definition to be conditional on the event type, and then use the correct action on the index accordingly. I believe this is what @marcosnils was referring to. I'll try and get that working and update this thread if I figure it out. Likewise, if you figure this out first or know a better way to do it, please update the thread.
@brett--anderson the default action is index.
Thanks @marcosnils, I see the action parameter in that link you provided. I'm just trying to figure out how to conditionally set it, since some items may require the action to be delete, while others require index.
This is almost working. No duplicate documents, and single deletes are propagating across to the index. I'm using Logstash 1.5.6 as I'm still having no luck with 2.2 (@bretd25 you mentioned you got it working based on that docker file, but as far as I can tell it's specifying logstash 1.5.6?). Although the below configuration works for single deletes, if I try and delete 3 or 4 items at a time, only around half are removed from the index. I don't know if this is something to do with trying to bulk delete over http with 1.5.6 or just a misconfiguration. Anyway, it's getting closer to what I want...
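The configuration referenced above was lost from this comment; a hedged reconstruction of a conditional delete/index output in the spirit described (the event_type field name and REMOVE value are assumptions about how the dynamodb filter exposes the stream operation, and the parameter names follow the 2.x elasticsearch output rather than 1.5.6):

```
output {
  if [event_type] == "REMOVE" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "my_index"
      action => "delete"              # remove the document from the index
      document_id => "%{item_uuid}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "my_index"
      action => "index"               # insert or update the document
      document_id => "%{item_uuid}"
    }
  }
}
```

Setting document_id in both branches matters: the delete can only target the document that the index branch created under the same id.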
@brett--anderson you won't be able to run it in logstash 2.x because the dynamo-filter plugin is not compatible with that version yet. I'm the author of the original plugin and I don't have the time at this moment to update it. @bretd25 told me he was very close to getting the filter plugin working in 2.x, so I guess we should wait to hear from him.
@Marcos I do have the filter working in 2.x. Works nicely.
@bretd25 amazing. Any chance you can submit a PR so I can upload the new gem?
@Marcos I made no changes. Just got it installed. I have to create another environment, so I will document what I did to get it installed.
@bretd25 amazing. We can go ahead and create a logstash 2.x docker image with both plugins installed now :D
@marcosnils I'm interested to know: is this docker image public? @bretd25 I'm guessing the 2.0 support I added for input is working for you?
Yeah, you can find the current version here:
@ahmedammar Yes, sorry for the delayed response.
@marcosnils Logstash runs great if I run it manually. I am trying to get it to run as a service and I cannot get past this error message:

```
Sending logstash logs to /etc/logstash/logs//logstash.log.
no such file to load -- com/amazonaws/aws-java-sdk-elasticbeanstalk/1.10.11/aws-java-sdk-elasticbeanstalk-1.10.11 (LoadError)
```

Have you seen this error before? Bret
Hi @marcosnils, any chance you could push that 2.x Logstash / DynamoDB version to Docker Hub (or even push it to a branch of the associated git repo)? I'm about to try and get it working in Docker myself but I'd hate to waste time reinventing the wheel if you've already done this.
I'll push it later today.
@marcosnils Thanks! I really appreciate your work on these plugins. I noticed you've pushed a version 1 specific Docker file to Docker Hub. It also looks like you are just in the process of deploying version 2.x Gems; is this in preparation for a version 2 Docker file? If you are actively developing these, perhaps I could throw in another question...

When I try and run the :1 Docker file with a local version of DynamoDB, I receive a "Cannot do operations on a non-existent table" error. I'm using the same access_key_id that I specified when I created the tables, and indeed using the local DynamoDB shell with that same access_key_id describes the table in question without any problems. Just in case there was still an access key id issue, I tried running the local DynamoDB instance with the -sharedDb option. When I tried to run the Docker file again I got a different error: "No region found with any service for endpoint http://192.168.59.103:8000/" (Note that I'm developing on OSX using docker-osx-dev, so the containers are running inside a VM, thus the 192.~ IP address and not localhost). Again, using these same endpoints in a Python Notebook or DynamoDB shell gives no problem connecting.

Does the Docker image work with a local version of DynamoDB for you? If so, any idea why I am seeing these errors? As far as I can tell the local DynamoDB instance creates the streams locally as well, as I'm able to include a stream specification when I create the table, and I'm also able to then list the stream for that table and describe it as well. If you haven't tested it locally, any chance you could try it out, in case it's something that can be fixed quickly? Sorry to ask so much; unfortunately I don't have the time to figure out how to set up and debug the Ruby components locally right now, so if it's relatively quick for you to do, I'd be forever grateful.
@brett--anderson In order to make it work with DynamoDB local you need to do some logstash configuration first. Check #10 for more details. Regarding the version upgrade, I'm struggling with different jruby / logstash / dependency versions in order to get everything on track for version 2. I've tagged the :1 version, which is the current working version with logstash 1.5.6. Regarding the non-existing table issue, I'm not sure why it might be happening. I'm using the :1 docker image in prod without any problems. Check if you can make it work locally with the fix I've posted before and we can see from there.
@marcosnils Amazing! The solution for #10 worked perfectly! For the record, I ran the command:
Also for the record, I used the following regions.xml file:
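The file contents were lost from this comment. As a hedged illustration only (structure reconstructed from memory of the AWS Java SDK v1 regions-override format, not from this thread; the endpoint is the local DynamoDB address mentioned above):

```
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!-- hypothetical regions.xml: point the us-east-1 dynamodb endpoint
     at the local DynamoDB instance running in the boot2docker VM -->
<XML>
  <Regions>
    <Region>
      <Name>us-east-1</Name>
      <Endpoint>
        <ServiceName>dynamodb</ServiceName>
        <Http>true</Http>
        <Https>false</Https>
        <Hostname>192.168.59.103:8000</Hostname>
      </Endpoint>
    </Region>
  </Regions>
</XML>
```

The SDK is then pointed at such a file via the com.amazonaws.regions.RegionUtils.fileOverride system property (again, an assumption about the mechanism #10 describes, not a quote from it).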
I'm happy working with the :1 version for the moment, since it works in all cases except the multi-delete issue I mentioned earlier. I'll certainly upgrade to the :2 image when it becomes available though. Thanks again!
Happy to merge PR's 😄 . 2.0 is giving me a headache as it's a pain in the ass to test plugins without deploying them to rubygems (logstash sucks for this). Hope to get it ready ASAP.
@brett--anderson @ahmedammar @bretd25 the docker image is ready:
Having a problem installing the plugin on a fresh system.
```
Installing logstash-input-dynamodb
Plugin version conflict, aborting
ERROR: Installation Aborted, message: Bundler could not find compatible versions for gem "logstash-core":
  In snapshot (Gemfile.lock):
    logstash-core (= 2.2.0)

  In Gemfile:
    logstash-codec-cloudtrail (>= 0) java depends on
      logstash-codec-spool (>= 0) java depends on
        logstash-core (< 3.0.0, >= 2.0.0.beta2) java

    logstash-codec-cloudtrail (>= 0) java depends on
      logstash-codec-spool (>= 0) java depends on
        logstash-core (< 3.0.0, >= 2.0.0.beta2) java

    logstash-codec-nmap (>= 0) java depends on
      logstash-core (< 3.0.0, >= 2.0.0) java

    logstash-codec-cloudtrail (>= 0) java depends on
      logstash-codec-spool (>= 0) java depends on
        logstash-core (< 3.0.0, >= 2.0.0.beta2) java

    logstash-codec-cloudtrail (>= 0) java depends on
      logstash-codec-spool (>= 0) java depends on
        logstash-core (< 3.0.0, >= 2.0.0.beta2) java

    logstash-codec-cloudtrail (>= 0) java depends on
      logstash-codec-spool (>= 0) java depends on
        logstash-core (< 3.0.0, >= 2.0.0.beta2) java
```