Bug Report

Describe the bug: when Fluent Bit 1.8.9 first restarts to apply configuration changes, we are seeing spamming errors in the log like:

[2021/10/30 02:47:00] [ warn] [engine] failed to flush chunk '2372-1635562009.567200761.flb', retry in X seconds: task_id=X, input=tail.0 > output=es.0 (out_id=0)
[2022/03/24 04:19:54] [ warn] [engine] failed to flush chunk '1-1648095560.205735907.flb', retry in 40 seconds: task_id=0, input=tail.0 > output=es.0 (out_id=0)
[2022/03/24 04:20:26] [ warn] [engine] failed to flush chunk '1-1648095560.297175793.flb', retry in 161 seconds: task_id=2, input=tail.0 > output=es.0 (out_id=0)

Version used: helm-charts-fluent-bit-0.19.19. I am using the AWS FireLens logging driver with Fluent Bit as the log router; I followed Elastic Cloud's documentation and everything seemed pretty straightforward, but it just doesn't work. Trace logging is enabled, but there is no log entry that helps me further. While the retries pile up, the tail input stops accepting records:

[2022/03/24 04:19:20] [debug] [input chunk] tail.0 is paused, cannot append records

With debug logging on, the Elasticsearch output reports success at the HTTP level, yet individual items in the bulk response are rejected with status 400:

[2022/03/25 07:08:41] [debug] [outputes.0] HTTP Status=200 URI=/_bulk
"status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}

Possibly related: "could not pack/validate JSON response" #1679 and "[1.7] Fails to send data to ElasticSearch" #3052.
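The HTTP Status=200 on /_bulk is misleading: Elasticsearch reports per-item failures in the response body, which Fluent Bit only surfaces at debug level. A quick sketch (plain Python; the function name is mine) of pulling those item-level errors out of a captured bulk response:

```python
import json
from collections import Counter

def bulk_item_errors(body: str) -> Counter:
    """Tally item-level errors in an Elasticsearch _bulk response.

    A bulk request can return HTTP 200 while every item inside it
    failed; the real verdict is the per-item "status" and "error".
    """
    resp = json.loads(body)
    errors = Counter()
    if not resp.get("errors"):
        return errors  # fast path: no item failed
    for item in resp.get("items", []):
        # each item wraps one action: {"create": {...}}, {"index": {...}}, ...
        for action in item.values():
            if action.get("status", 200) >= 300:
                err = action.get("error", {})
                errors[(action["status"], err.get("type", "unknown"))] += 1
    return errors

# a pared-down response in the same shape as the ones in this thread
sample = ('{"took":3473,"errors":true,"items":[{"create":{'
          '"_index":"logstash-2022.03.24","status":400,"error":{'
          '"type":"mapper_parsing_exception","reason":"Could not dynamically '
          'add mapping for field [app.kubernetes.io/instance]."}}}]}')
print(bulk_item_errors(sample))  # -> Counter({(400, 'mapper_parsing_exception'): 1})
```

Running this over a few captured responses quickly shows whether every rejection has the same mapper_parsing_exception reason or there is a mix of failures.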
A fuller debug trace shows the cycle for a single chunk: the keep-alive connection to Elasticsearch is recycled, the bulk request returns HTTP 200, and the chunk is nonetheless re-queued for retry:

[2022/03/25 07:08:28] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 is now available
[2022/03/25 07:08:32] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
[2022/03/25 07:08:22] [debug] [outputes.0] HTTP Status=200 URI=/_bulk
[2022/03/25 07:08:50] [debug] [retry] re-using retry for task_id=14 attempts=2
[2022/03/25 07:08:21] [ warn] [engine] failed to flush chunk '1-1648192100.653122953.flb', retry in 11 seconds: task_id=3, input=tail.0 > output=es.0 (out_id=0)

Meanwhile the tail input itself is healthy: rotated files keep generating events and deleted files are removed from the watch list:

[2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104048905 file has been deleted: /var/log/containers/hello-world-ctlp5_argo_wait-f817c7cb9f30a0ba99fb3976757b495771f6d8f23e1ae5474ef191a309db70fc.log
[2022/03/25 07:08:23] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY

Graylog works fine with the same records; only the Elasticsearch output is affected. The output configuration, as far as it was shared:

Name  es
Match kube.
Host  10.3.4.84

The same symptom is tracked in fluent-bit issue #4386 ("From fluent-bit to es: [ warn] [engine] failed to flush chunk", https://github.com/fluent/fluent-bit/issues/4386) and discussed in "elasticsearch - failed to flush the buffer fluentd" on Stack Overflow.
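The mapper_parsing_exception comes from the dots in the label key: once the index has mapped kubernetes.labels.app as text, a record carrying the label app.kubernetes.io/instance asks Elasticsearch to treat app as an object, which the existing mapping forbids. The es output's Replace_Dots option sidesteps this by rewriting dots in field names to underscores, and Trace_Error makes the per-item rejections visible without full debug logging. A sketch of the output section (Host taken from the logs above, the other values illustrative):

```
[OUTPUT]
    Name            es
    Match           kube.*
    Host            10.3.4.84
    Port            9200
    Logstash_Format On
    Replace_Dots    On
    Trace_Error     On
```

Note that Replace_Dots only prevents new conflicts; records already rejected against the old mapping will keep failing until the index rolls over or is reindexed.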
Each failed chunk keeps a retry task alive until it either succeeds or exhausts its retries:

[2022/03/24 04:20:06] [ warn] [engine] failed to flush chunk '1-1648095560.254537600.flb', retry in 60 seconds: task_id=1, input=tail.0 > output=es.0 (out_id=0)
[2022/03/25 07:08:31] [ warn] [engine] failed to flush chunk '1-1648192101.677940929.flb', retry in 21 seconds: task_id=4, input=tail.0 > output=es.0 (out_id=0)
[2022/03/25 07:08:40] [ warn] [engine] failed to flush chunk '1-1648192120.74298017.flb', retry in 10 seconds: task_id=14, input=tail.0 > output=es.0 (out_id=0)

The same class of failure is reported from Fluentd: "Fluentd-forwarded data being pushed into Elasticsearch throws the following errors: 2019-05-21 08:57:09 +0000 [warn]: #0 [elasticsearch] failed to flush the buffer." In my first attempt I was getting these errors as well: 2022-03-25 18:52:17 [2022/03/25 21:52:17] [ warn] ...

Under this scenario, what I believe is happening is that the buffer is filled with records Elasticsearch will never accept, and Fluent Bit keeps retrying them indefinitely. I am also wondering whether I should update Elasticsearch to the latest 7.x version.
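The growing "retry in N seconds" intervals are the engine scheduler's capped exponential backoff. When chunks are rejected for a permanent reason, as with this mapping conflict, retrying forever only accumulates tasks; recent Fluent Bit versions let you bound both the backoff and the number of attempts. A sketch (the values here are illustrative, not recommendations):

```
[SERVICE]
    # backoff base and cap, in seconds, for failed-task rescheduling
    scheduler.base  5
    scheduler.cap   300

[OUTPUT]
    Name        es
    Match       kube.*
    # drop a chunk after 5 failed flush attempts instead of retrying forever
    Retry_Limit 5
```

A finite Retry_Limit trades data loss for bounded memory and log noise, so it only makes sense once the rejected records are known to be unrecoverable.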
Answer: I had similar issues with "failed to flush chunk" in the fluent-bit logs, and eventually figured out that the index I was trying to send logs to already had a _type set to doc, while fluent-bit was trying to send with _type set to _doc (which is the default). Until that mismatch is fixed, every flush fails the same way. Two further notes: Fluentd does not handle a large number of buffered chunks well when starting up, so a backlog of failed chunks can itself become a problem; and with only the warn lines, trace information is scarce and output issues are hard to troubleshoot (see also "Fluentbit gets stuck [multiple issues]" #3581) — the per-item bulk errors are the real signal.
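For the _type mismatch in this answer, the es output's Type option can be pointed at the index's existing type; on Elasticsearch 7+, where mapping types are deprecated, Suppress_Type_Name drops the type from the request instead. A sketch (option names as in the out_es docs, the rest illustrative):

```
[OUTPUT]
    Name  es
    Match kube.*
    Host  10.3.4.84
    # match the index's existing mapping type ("doc", not the default "_doc")
    Type  doc
    # on Elasticsearch 7+/8, omit the type entirely instead:
    # Suppress_Type_Name On
```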
For reference, complete bulk responses look like this (truncated):

{"took":3473,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"2-Mmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}, ...]}
{"took":2250,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"-uMmun8BI6SaBP9l_8nZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. ..."}}}, ...]}

Every item carries the same reason, so until the mapping conflict is resolved no chunk can ever flush. If that does not match your case, share steps to reproduce, including your config. A worked end-to-end setup is described in "Collect kubernetes logs with fluentbit and elasticsearch" (GitHub Pages).
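If the dotted label keys must be kept as-is, the conflict has to be resolved on the Elasticsearch side instead, for example by mapping kubernetes.labels as a flattened field (available since Elasticsearch 7.3) in an index template and letting the index roll over, so label values never spawn sub-objects. A sketch using the legacy template API; the template and pattern names mirror the logstash-* indices above:

```
PUT _template/logstash
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "properties": {
      "kubernetes": {
        "properties": {
          "labels": { "type": "flattened" }
        }
      }
    }
  }
}
```

A flattened field maps the whole labels object as one field, which stops the type conflicts at the cost of coarser search and aggregation over individual labels.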